00:00:00.000 Started by upstream project "autotest-per-patch" build number 132568
00:00:00.000 originally caused by:
00:00:00.000  Started by user sys_sgci
00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:02.867 The recommended git tool is: git
00:00:02.868 using credential 00000000-0000-0000-0000-000000000002
00:00:02.870 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.886 Fetching changes from the remote Git repository
00:00:02.889 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.901 Using shallow fetch with depth 1
00:00:02.901 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.902 > git --version # timeout=10
00:00:02.912 > git --version # 'git version 2.39.2'
00:00:02.912 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.925 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.925 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.516 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.528 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.540 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.540 > git config core.sparsecheckout # timeout=10
00:00:08.551 > git read-tree -mu HEAD # timeout=10
00:00:08.569 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.596 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.596 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.677 [Pipeline] Start of Pipeline
00:00:08.693 [Pipeline] library
00:00:08.695 Loading library shm_lib@master
00:00:08.695 Library shm_lib@master is cached. Copying from home.
00:00:08.711 [Pipeline] node
00:00:08.720 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.721 [Pipeline] {
00:00:08.730 [Pipeline] catchError
00:00:08.731 [Pipeline] {
00:00:08.742 [Pipeline] wrap
00:00:08.750 [Pipeline] {
00:00:08.758 [Pipeline] stage
00:00:08.760 [Pipeline] { (Prologue)
00:00:08.975 [Pipeline] sh
00:00:09.259 + logger -p user.info -t JENKINS-CI
00:00:09.280 [Pipeline] echo
00:00:09.282 Node: CYP9
00:00:09.288 [Pipeline] sh
00:00:09.598 [Pipeline] setCustomBuildProperty
00:00:09.608 [Pipeline] echo
00:00:09.610 Cleanup processes
00:00:09.614 [Pipeline] sh
00:00:09.903 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.903 2063580 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.919 [Pipeline] sh
00:00:10.211 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.211 ++ grep -v 'sudo pgrep'
00:00:10.211 ++ awk '{print $1}'
00:00:10.211 + sudo kill -9
00:00:10.211 + true
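(For reference, the process-cleanup step traced above amounts to the shell fragment below; WORKSPACE stands in for the job directory shown in the trace, and the trailing `|| true` mirrors the `+ true` line, since `kill -9` with an empty PID list fails harmlessly when no stale SPDK processes survive the grep filter.)

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List processes touching the SPDK tree, drop the pgrep invocation itself,
    # and keep only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Force-kill leftovers from a previous run; tolerate an empty PID list.
    sudo kill -9 $pids || true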
00:00:10.225 [Pipeline] cleanWs
00:00:10.236 [WS-CLEANUP] Deleting project workspace...
00:00:10.237 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.244 [WS-CLEANUP] done
00:00:10.247 [Pipeline] setCustomBuildProperty
00:00:10.259 [Pipeline] sh
00:00:10.543 + sudo git config --global --replace-all safe.directory '*'
00:00:10.636 [Pipeline] httpRequest
00:00:10.928 [Pipeline] echo
00:00:10.929 Sorcerer 10.211.164.20 is alive
00:00:10.937 [Pipeline] retry
00:00:10.939 [Pipeline] {
00:00:10.952 [Pipeline] httpRequest
00:00:10.957 HttpMethod: GET
00:00:10.957 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.958 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.966 Response Code: HTTP/1.1 200 OK
00:00:10.967 Success: Status code 200 is in the accepted range: 200,404
00:00:10.967 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.741 [Pipeline] }
00:00:20.759 [Pipeline] // retry
00:00:20.767 [Pipeline] sh
00:00:21.058 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.075 [Pipeline] httpRequest
00:00:21.427 [Pipeline] echo
00:00:21.428 Sorcerer 10.211.164.20 is alive
00:00:21.437 [Pipeline] retry
00:00:21.439 [Pipeline] {
00:00:21.451 [Pipeline] httpRequest
00:00:21.456 HttpMethod: GET
00:00:21.457 URL: http://10.211.164.20/packages/spdk_4915847b4874605465d8d187c4b368db688baa60.tar.gz
00:00:21.457 Sending request to url: http://10.211.164.20/packages/spdk_4915847b4874605465d8d187c4b368db688baa60.tar.gz
00:00:21.463 Response Code: HTTP/1.1 200 OK
00:00:21.463 Success: Status code 200 is in the accepted range: 200,404
00:00:21.463 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4915847b4874605465d8d187c4b368db688baa60.tar.gz
00:04:56.010 [Pipeline] }
00:04:56.031 [Pipeline] // retry
00:04:56.039 [Pipeline] sh
00:04:56.331 + tar --no-same-owner -xf spdk_4915847b4874605465d8d187c4b368db688baa60.tar.gz
00:04:59.650 [Pipeline] sh
00:04:59.942 + git -C spdk log --oneline -n5
00:04:59.942 4915847b4 lib/nvmf: create pollers for each transport poll group
00:04:59.942 345c51d49 nvmf/tcp: remove await_req TAILQ
00:04:59.942 e286d3c2f nvmf/tcp: add nvmf_tcp_qpair_process() helper function
00:04:59.942 e9dea99c0 nvmf/tcp: simplify nvmf_tcp_poll_group_poll event counting
00:04:59.942 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:04:59.953 [Pipeline] }
00:04:59.967 [Pipeline] // stage
00:04:59.979 [Pipeline] stage
00:04:59.982 [Pipeline] { (Prepare)
00:04:59.995 [Pipeline] writeFile
00:05:00.010 [Pipeline] sh
00:05:00.297 + logger -p user.info -t JENKINS-CI
00:05:00.310 [Pipeline] sh
00:05:00.596 + logger -p user.info -t JENKINS-CI
00:05:00.610 [Pipeline] sh
00:05:00.899 + cat autorun-spdk.conf
00:05:00.899 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:00.899 SPDK_TEST_NVMF=1
00:05:00.899 SPDK_TEST_NVME_CLI=1
00:05:00.899 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:00.899 SPDK_TEST_NVMF_NICS=e810
00:05:00.899 SPDK_TEST_VFIOUSER=1
00:05:00.899 SPDK_RUN_UBSAN=1
00:05:00.899 NET_TYPE=phy
00:05:00.907 RUN_NIGHTLY=0
00:05:00.912 [Pipeline] readFile
00:05:00.936 [Pipeline] withEnv
00:05:00.938 [Pipeline] {
00:05:00.949 [Pipeline] sh
00:05:01.238 + set -ex
00:05:01.238 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:05:01.238 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:01.238 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:01.238 ++ SPDK_TEST_NVMF=1
00:05:01.238 ++ SPDK_TEST_NVME_CLI=1
00:05:01.238 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:01.238 ++ SPDK_TEST_NVMF_NICS=e810
00:05:01.238 ++ SPDK_TEST_VFIOUSER=1
00:05:01.238 ++ SPDK_RUN_UBSAN=1
00:05:01.238 ++ NET_TYPE=phy
00:05:01.238 ++ RUN_NIGHTLY=0
00:05:01.238 + case $SPDK_TEST_NVMF_NICS in
00:05:01.238 + DRIVERS=ice
00:05:01.238 + [[ tcp == \r\d\m\a ]]
00:05:01.238 + [[ -n ice ]]
00:05:01.238 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:05:01.238 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:05:01.238 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:05:01.238 rmmod: ERROR: Module irdma is not currently loaded
00:05:01.238 rmmod: ERROR: Module i40iw is not currently loaded
00:05:01.238 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:01.238 + true
00:05:01.238 + for D in $DRIVERS
00:05:01.238 + sudo modprobe ice
00:05:01.238 + exit 0
00:05:01.248 [Pipeline] }
00:05:01.263 [Pipeline] // withEnv
00:05:01.277 [Pipeline] }
00:05:01.295 [Pipeline] // stage
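(The NIC preparation traced above reduces to the sketch below. Only the e810 arm is visible in this run — SPDK_TEST_NVMF_NICS=e810 selects the Intel ice driver — so the other case arms are omitted rather than guessed; the rmmod list is copied from the trace.)

    # e810 NICs are driven by ice; the case statement in the trace picks this.
    DRIVERS=ice
    # Unload competing RDMA/iWARP modules first; "not currently loaded"
    # errors are expected on a TCP run and are tolerated, as the `+ true` shows.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done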
00:05:01.301 [Pipeline] catchError
00:05:01.302 [Pipeline] {
00:05:01.311 [Pipeline] timeout
00:05:01.311 Timeout set to expire in 1 hr 0 min
00:05:01.312 [Pipeline] {
00:05:01.320 [Pipeline] stage
00:05:01.321 [Pipeline] { (Tests)
00:05:01.330 [Pipeline] sh
00:05:01.615 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:01.615 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:01.615 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:01.615 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:01.615 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:01.615 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:01.615 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:01.615 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:01.615 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:01.615 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:01.615 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:01.615 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:01.615 + source /etc/os-release
00:05:01.615 ++ NAME='Fedora Linux'
00:05:01.615 ++ VERSION='39 (Cloud Edition)'
00:05:01.615 ++ ID=fedora
00:05:01.615 ++ VERSION_ID=39
00:05:01.615 ++ VERSION_CODENAME=
00:05:01.615 ++ PLATFORM_ID=platform:f39
00:05:01.615 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:01.615 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:01.615 ++ LOGO=fedora-logo-icon
00:05:01.615 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:01.615 ++ HOME_URL=https://fedoraproject.org/
00:05:01.615 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:01.615 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:01.615 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:01.615 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:01.615 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:01.615 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:01.615 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:01.615 ++ SUPPORT_END=2024-11-12
00:05:01.615 ++ VARIANT='Cloud Edition'
00:05:01.615 ++ VARIANT_ID=cloud
00:05:01.615 + uname -a
00:05:01.615 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:01.615 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:04.919 Hugepages
00:05:04.919 node     hugesize     free /  total
00:05:04.919 node0   1048576kB        0 /      0
00:05:04.919 node0      2048kB        0 /      0
00:05:04.919 node1   1048576kB        0 /      0
00:05:04.919 node1      2048kB        0 /      0
00:05:04.919
00:05:04.919 Type   BDF            Vendor Device NUMA Driver   Device Block devices
00:05:04.919 I/OAT  0000:00:01.0   8086   0b00   0    ioatdma  -      -
00:05:04.919 I/OAT  0000:00:01.1   8086   0b00   0    ioatdma  -      -
00:05:04.919 I/OAT  0000:00:01.2   8086   0b00   0    ioatdma  -      -
00:05:04.919 I/OAT  0000:00:01.3   8086   0b00   0    ioatdma  -      -
00:05:04.919 I/OAT  0000:00:01.4   8086   0b00   0    ioatdma  -      -
00:05:04.919 I/OAT  0000:00:01.5   8086   0b00   0    ioatdma  -      -
00:05:04.919 I/OAT  0000:00:01.6   8086   0b00   0    ioatdma  -      -
00:05:04.919 I/OAT  0000:00:01.7   8086   0b00   0    ioatdma  -      -
00:05:04.919 NVMe   0000:65:00.0   144d   a80a   0    nvme     nvme0  nvme0n1
00:05:04.919 I/OAT  0000:80:01.0   8086   0b00   1    ioatdma  -      -
00:05:04.919 I/OAT  0000:80:01.1   8086   0b00   1    ioatdma  -      -
00:05:04.919 I/OAT  0000:80:01.2   8086   0b00   1    ioatdma  -      -
00:05:04.919 I/OAT  0000:80:01.3   8086   0b00   1    ioatdma  -      -
00:05:04.919 I/OAT  0000:80:01.4   8086   0b00   1    ioatdma  -      -
00:05:04.920 I/OAT  0000:80:01.5   8086   0b00   1    ioatdma  -      -
00:05:04.920 I/OAT  0000:80:01.6   8086   0b00   1    ioatdma  -      -
00:05:04.920 I/OAT  0000:80:01.7   8086   0b00   1    ioatdma  -      -
00:05:04.920 + rm -f /tmp/spdk-ld-path
00:05:04.920 + source autorun-spdk.conf
00:05:04.920 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:04.920 ++ SPDK_TEST_NVMF=1
00:05:04.920 ++ SPDK_TEST_NVME_CLI=1
00:05:04.920 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:04.920 ++ SPDK_TEST_NVMF_NICS=e810
00:05:04.920 ++ SPDK_TEST_VFIOUSER=1
00:05:04.920 ++ SPDK_RUN_UBSAN=1
00:05:04.920 ++ NET_TYPE=phy
00:05:04.920 ++ RUN_NIGHTLY=0
00:05:04.920 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:04.920 + [[ -n '' ]]
00:05:04.920 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:04.920 + for M in /var/spdk/build-*-manifest.txt
00:05:04.920 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:04.920 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:04.920 + for M in /var/spdk/build-*-manifest.txt
00:05:04.920 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:04.920 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:04.920 + for M in /var/spdk/build-*-manifest.txt
00:05:04.920 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:04.920 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:04.920 ++ uname
00:05:04.920 + [[ Linux == \L\i\n\u\x ]]
00:05:04.920 + sudo dmesg -T
00:05:04.920 + sudo dmesg --clear
00:05:04.920 + dmesg_pid=2065726
00:05:04.920 + [[ Fedora Linux == FreeBSD ]]
00:05:04.920 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:04.920 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:04.920 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:04.920 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:05:04.920 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:05:04.920 + [[ -x /usr/src/fio-static/fio ]]
00:05:04.920 + export FIO_BIN=/usr/src/fio-static/fio
00:05:04.920 + FIO_BIN=/usr/src/fio-static/fio
00:05:04.920 + sudo dmesg -Tw
00:05:04.920 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:04.920 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:04.920 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:04.920 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:04.920 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:04.920 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:04.920 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:04.920 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:04.920 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
07:00:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
07:00:16 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
07:00:16 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
07:00:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
07:00:16 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
07:00:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
07:00:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
07:00:16 -- scripts/common.sh@15 -- $ shopt -s extglob
07:00:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
07:00:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:00:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
07:00:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:00:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:00:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:00:16 -- paths/export.sh@5 -- $ export PATH
07:00:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
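(The very long, partly repetitive PATH values above are just /etc/opt/spdk-pkgdep/paths/export.sh prepending the toolchain directories on every sourcing; an earlier sourcing left the same directories further down the string. Lines @2-@5 are equivalent to this sketch:)

    PATH=/opt/golangci/1.54.2/bin:$PATH   # export.sh@2
    PATH=/opt/go/1.21.1/bin:$PATH         # export.sh@3
    PATH=/opt/protoc/21.7/bin:$PATH       # export.sh@4
    export PATH                           # export.sh@5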
00:05:05.182 07:00:16 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
07:00:16 -- common/autobuild_common.sh@493 -- $ date +%s
07:00:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732687216.XXXXXX
07:00:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732687216.hVcEyz
07:00:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
07:00:16 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
07:00:16 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
07:00:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
07:00:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
07:00:16 -- common/autobuild_common.sh@509 -- $ get_config_params
07:00:16 -- common/autotest_common.sh@409 -- $ xtrace_disable
07:00:16 -- common/autotest_common.sh@10 -- $ set +x
00:05:05.182 07:00:16 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
07:00:16 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
07:00:16 -- pm/common@17 -- $ local monitor
07:00:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:00:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:00:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:00:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:00:16 -- pm/common@21 -- $ date +%s
07:00:16 -- pm/common@21 -- $ date +%s
07:00:16 -- pm/common@25 -- $ sleep 1
07:00:16 -- pm/common@21 -- $ date +%s
07:00:16 -- pm/common@21 -- $ date +%s
07:00:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732687216
07:00:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732687216
07:00:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732687216
07:00:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732687216
00:05:05.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732687216_collect-cpu-load.pm.log
00:05:05.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732687216_collect-vmstat.pm.log
00:05:05.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732687216_collect-cpu-temp.pm.log
00:05:05.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732687216_collect-bmc-pm.bmc.pm.log
00:05:06.166 07:00:17 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
07:00:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
07:00:17 -- spdk/autobuild.sh@12 -- $ umask 022
07:00:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
07:00:17 -- spdk/autobuild.sh@16 -- $ date -u
00:05:06.166 Wed Nov 27 06:00:17 AM UTC 2024
07:00:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:06.166 v25.01-pre-275-g4915847b4
07:00:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
07:00:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
07:00:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
07:00:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
07:00:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
07:00:17 -- common/autotest_common.sh@10 -- $ set +x
00:05:06.166 ************************************
00:05:06.166 START TEST ubsan
00:05:06.166 ************************************
07:00:17 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:06.166 using ubsan
00:05:06.166
00:05:06.166 real	0m0.001s
00:05:06.166 user	0m0.001s
00:05:06.166 sys	0m0.000s
07:00:17 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
07:00:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:06.166 ************************************
00:05:06.166 END TEST ubsan
00:05:06.166 ************************************
07:00:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
07:00:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
07:00:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
07:00:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
07:00:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
07:00:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
07:00:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
07:00:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
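(The banner-and-timing pattern around `run_test ubsan` above comes from the run_test helper in common/autotest_common.sh, per the xtrace prefixes. The sketch below is only a rough approximation of its observable behaviour — the real helper also manages xtrace and failure accounting — but it reproduces the START TEST/END TEST block and the real/user/sys summary seen in the log:)

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # prints the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test ubsan echo 'using ubsan'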
07:00:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:06.449 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:06.449 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:06.709 Using 'verbs' RDMA provider
00:05:22.562 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:37.474 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:37.474 Creating mk/config.mk...done.
00:05:37.474 Creating mk/cc.flags.mk...done.
00:05:37.474 Type 'make' to build.
00:05:37.474 07:00:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
07:00:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
07:00:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable
07:00:46 -- common/autotest_common.sh@10 -- $ set +x
00:05:37.474 ************************************
00:05:37.474 START TEST make
00:05:37.474 ************************************
07:00:46 make -- common/autotest_common.sh@1129 -- $ make -j144
00:05:37.474 make[1]: Nothing to be done for 'all'.
00:05:37.474 The Meson build system
00:05:37.474 Version: 1.5.0
00:05:37.474 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:05:37.474 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:37.474 Build type: native build
00:05:37.474 Project name: libvfio-user
00:05:37.474 Project version: 0.0.1
00:05:37.474 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:37.474 C linker for the host machine: cc ld.bfd 2.40-14
00:05:37.474 Host machine cpu family: x86_64
00:05:37.474 Host machine cpu: x86_64
00:05:37.474 Run-time dependency threads found: YES
00:05:37.474 Library dl found: YES
00:05:37.474 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:37.474 Run-time dependency json-c found: YES 0.17
00:05:37.474 Run-time dependency cmocka found: YES 1.1.7
00:05:37.474 Program pytest-3 found: NO
00:05:37.474 Program flake8 found: NO
00:05:37.474 Program misspell-fixer found: NO
00:05:37.474 Program restructuredtext-lint found: NO
00:05:37.474 Program valgrind found: YES (/usr/bin/valgrind)
00:05:37.474 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:37.474 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:37.474 Compiler for C supports arguments -Wwrite-strings: YES
00:05:37.474 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:37.474 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:37.474 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:37.474 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:37.474 Build targets in project: 8
00:05:37.474 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:37.474 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:37.474
00:05:37.474 libvfio-user 0.0.1
00:05:37.474
00:05:37.474 User defined options
00:05:37.474 buildtype : debug
00:05:37.474 default_library: shared
00:05:37.474 libdir : /usr/local/lib
00:05:37.474
00:05:37.474 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:38.048 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:38.048 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:38.048 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:38.048 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:38.048 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:38.048 [5/37] Compiling C object samples/null.p/null.c.o
00:05:38.048 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:38.048 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:38.048 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:38.048 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:38.048 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:38.048 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:38.048 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:38.048 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:38.048 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:38.048 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:38.048 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:38.048 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:38.048 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:38.048 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:38.048 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:38.048 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:38.048 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:38.048 [23/37] Compiling C object samples/server.p/server.c.o
00:05:38.048 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:38.048 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:38.048 [26/37] Compiling C object samples/client.p/client.c.o
00:05:38.309 [27/37] Linking target samples/client
00:05:38.309 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:38.309 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:38.309 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:05:38.309 [31/37] Linking target test/unit_tests
00:05:38.570 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:38.570 [33/37] Linking target samples/server
00:05:38.570 [34/37] Linking target samples/lspci
00:05:38.570 [35/37] Linking target samples/null
00:05:38.570 [36/37] Linking target samples/gpio-pci-idio-16
00:05:38.570 [37/37] Linking target samples/shadow_ioeventfd_server
00:05:38.570 INFO: autodetecting backend as ninja
00:05:38.570 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
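(The libvfio-user build above is a stock Meson/ninja out-of-tree build. The log shows only Meson's output and the install step that follows, so the corresponding setup command below is inferred from the "User defined options" summary — buildtype debug, default_library shared, libdir /usr/local/lib — and should be read as a sketch, not the exact invocation SPDK's build scripts use:)

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Configure an out-of-tree debug build of the bundled libvfio-user.
    meson setup build/libvfio-user/build-debug libvfio-user \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    # Compile; the staged install into spdk/build/libvfio-user follows in the log.
    ninja -C build/libvfio-user/build-debug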
00:05:38.570 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:38.830 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:38.830 ninja: no work to do.
00:05:45.424 The Meson build system
00:05:45.424 Version: 1.5.0
00:05:45.424 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:05:45.424 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:05:45.424 Build type: native build
00:05:45.424 Program cat found: YES (/usr/bin/cat)
00:05:45.424 Project name: DPDK
00:05:45.424 Project version: 24.03.0
00:05:45.424 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:45.424 C linker for the host machine: cc ld.bfd 2.40-14
00:05:45.424 Host machine cpu family: x86_64
00:05:45.424 Host machine cpu: x86_64
00:05:45.424 Message: ## Building in Developer Mode ##
00:05:45.424 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:45.424 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:45.424 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:45.424 Program python3 found: YES (/usr/bin/python3)
00:05:45.424 Program cat found: YES (/usr/bin/cat)
00:05:45.424 Compiler for C supports arguments -march=native: YES
00:05:45.424 Checking for size of "void *" : 8
00:05:45.424 Checking for size of "void *" : 8 (cached)
00:05:45.424 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:45.424 Library m found: YES
00:05:45.424 Library numa found: YES
00:05:45.424 Has header "numaif.h" : YES
00:05:45.424 Library fdt found: NO
00:05:45.424 Library execinfo found: NO
00:05:45.424 Has header "execinfo.h" : YES
00:05:45.424 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:45.424 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:45.424 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:45.424 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:45.424 Run-time dependency openssl found: YES 3.1.1
00:05:45.424 Run-time dependency libpcap found: YES 1.10.4
00:05:45.424 Has header "pcap.h" with dependency libpcap: YES
00:05:45.424 Compiler for C supports arguments -Wcast-qual: YES
00:05:45.424 Compiler for C supports arguments -Wdeprecated: YES
00:05:45.424 Compiler for C supports arguments -Wformat: YES
00:05:45.424 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:45.424 Compiler for C supports arguments -Wformat-security: NO
00:05:45.424 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:45.424 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:45.424 Compiler for C supports arguments -Wnested-externs: YES
00:05:45.424 Compiler for C supports arguments -Wold-style-definition: YES
00:05:45.424 Compiler for C supports arguments -Wpointer-arith: YES
00:05:45.424 Compiler for C supports arguments -Wsign-compare: YES
00:05:45.424 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:45.424 Compiler for C supports arguments -Wundef: YES
00:05:45.424 Compiler for C supports arguments -Wwrite-strings: YES
00:05:45.424 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:45.424 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:45.424 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:45.424 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:45.424 Program objdump found: YES (/usr/bin/objdump)
00:05:45.424 Compiler for C supports arguments -mavx512f: YES
00:05:45.424 Checking if "AVX512 checking" compiles: YES
00:05:45.424 Fetching value of define "__SSE4_2__" : 1
00:05:45.424 Fetching value of define "__AES__" : 1
00:05:45.424 Fetching value of define "__AVX__" : 1
00:05:45.424 Fetching value of define "__AVX2__" : 1
00:05:45.424 Fetching value of define "__AVX512BW__" : 1
00:05:45.424 Fetching value of define "__AVX512CD__" : 1
00:05:45.424 Fetching value of define "__AVX512DQ__" : 1
00:05:45.424 Fetching value of define "__AVX512F__" : 1
00:05:45.424 Fetching value of define "__AVX512VL__" : 1
00:05:45.424 Fetching value of define "__PCLMUL__" : 1
00:05:45.424 Fetching value of define "__RDRND__" : 1
00:05:45.424 Fetching value of define "__RDSEED__" : 1
00:05:45.424 Fetching value of define "__VPCLMULQDQ__" : 1
00:05:45.424 Fetching value of define "__znver1__" : (undefined)
00:05:45.424 Fetching value of define "__znver2__" : (undefined)
00:05:45.424 Fetching value of define "__znver3__" : (undefined)
00:05:45.424 Fetching value of define "__znver4__" : (undefined)
00:05:45.424 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:45.424 Message: lib/log: Defining dependency "log"
00:05:45.424 Message: lib/kvargs: Defining dependency "kvargs"
00:05:45.424 Message: lib/telemetry: Defining dependency "telemetry"
00:05:45.424 Checking for function "getentropy" : NO
00:05:45.424 Message: lib/eal: Defining dependency "eal"
00:05:45.424 Message: lib/ring: Defining dependency "ring"
00:05:45.424 Message: lib/rcu: Defining dependency "rcu"
00:05:45.424 Message: lib/mempool: Defining dependency "mempool"
00:05:45.424 Message: lib/mbuf: Defining dependency "mbuf"
00:05:45.424 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:45.424 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:45.424 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:45.424 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:45.424 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:45.424 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:05:45.424 Compiler for C supports arguments -mpclmul: YES
00:05:45.424 Compiler for C supports arguments -maes: YES
00:05:45.424 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:45.424 Compiler for C supports arguments -mavx512bw: YES
00:05:45.424 Compiler for C supports arguments -mavx512dq: YES
00:05:45.424 Compiler for C supports arguments -mavx512vl: YES
00:05:45.424 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:45.424 Compiler for C supports arguments -mavx2: YES
00:05:45.424 Compiler for C supports arguments -mavx: YES
00:05:45.424 Message: lib/net: Defining dependency "net"
00:05:45.424 Message: lib/meter: Defining dependency "meter"
00:05:45.424 Message: lib/ethdev: Defining dependency "ethdev"
00:05:45.424 Message: lib/pci: Defining dependency "pci"
00:05:45.424 Message: lib/cmdline: Defining dependency "cmdline"
00:05:45.424 Message: lib/hash: Defining dependency "hash"
00:05:45.424 Message: lib/timer: Defining dependency "timer"
00:05:45.424 Message: lib/compressdev: Defining dependency "compressdev"
00:05:45.424 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:45.424 Message: lib/dmadev: Defining dependency "dmadev"
00:05:45.424 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:45.424 Message: lib/power: Defining dependency "power"
00:05:45.424 Message: lib/reorder: Defining dependency "reorder"
00:05:45.424 Message: lib/security: Defining dependency "security"
00:05:45.424 Has header "linux/userfaultfd.h" : YES
00:05:45.424 Has header "linux/vduse.h" : YES
00:05:45.424 Message: lib/vhost: Defining dependency "vhost"
00:05:45.424 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:45.424 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:45.424 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:45.424 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:45.424 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:45.424 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:45.424 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:45.424 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:45.424 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:45.424 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:45.424 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:45.424 Configuring doxy-api-html.conf using configuration
00:05:45.424 Configuring doxy-api-man.conf using configuration
00:05:45.424 Program mandb found: YES (/usr/bin/mandb)
00:05:45.424 Program sphinx-build found: NO
00:05:45.424 Configuring rte_build_config.h using configuration
00:05:45.424 Message:
00:05:45.424 =================
00:05:45.424 Applications Enabled
00:05:45.424 =================
00:05:45.424
00:05:45.424 apps:
00:05:45.424
00:05:45.424
00:05:45.425 Message:
00:05:45.425 =================
00:05:45.425 Libraries Enabled
00:05:45.425 =================
00:05:45.425
00:05:45.425 libs:
00:05:45.425 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:45.425 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:45.425 cryptodev, dmadev, power, reorder, security, vhost,
00:05:45.425
00:05:45.425 Message:
00:05:45.425 ===============
00:05:45.425 Drivers Enabled
00:05:45.425 ===============
00:05:45.425
00:05:45.425 common:
00:05:45.425
00:05:45.425 bus:
00:05:45.425 pci, vdev,
00:05:45.425 mempool:
00:05:45.425 ring,
00:05:45.425 dma:
00:05:45.425
00:05:45.425 net:
00:05:45.425
00:05:45.425 crypto:
00:05:45.425
00:05:45.425 compress:
00:05:45.425
00:05:45.425 vdpa:
00:05:45.425
00:05:45.425
00:05:45.425 Message:
00:05:45.425 =================
00:05:45.425 Content Skipped
00:05:45.425 =================
00:05:45.425
00:05:45.425 apps:
00:05:45.425 dumpcap: explicitly disabled via build config
00:05:45.425 graph: explicitly disabled via build config
00:05:45.425 pdump: explicitly disabled via build config
00:05:45.425 proc-info: explicitly disabled via build config
00:05:45.425 test-acl: explicitly disabled via build config
00:05:45.425 test-bbdev: explicitly disabled via build config
00:05:45.425 test-cmdline: explicitly disabled via build config
00:05:45.425 test-compress-perf: explicitly disabled via build config
00:05:45.425 test-crypto-perf: explicitly disabled via build config
00:05:45.425 test-dma-perf: explicitly disabled via build config
00:05:45.425 test-eventdev: explicitly disabled via build config
00:05:45.425 test-fib: explicitly disabled via build config
00:05:45.425 test-flow-perf: explicitly disabled via build config
00:05:45.425 test-gpudev: explicitly disabled via build config
00:05:45.425 test-mldev: explicitly disabled via build config
00:05:45.425 test-pipeline: explicitly disabled via build config
00:05:45.425 test-pmd: explicitly disabled via build config
00:05:45.425 test-regex: explicitly disabled via build config
00:05:45.425 test-sad: explicitly disabled via build config
00:05:45.425 test-security-perf: explicitly disabled via build config
00:05:45.425
00:05:45.425 libs:
00:05:45.425 argparse: explicitly disabled via build config
00:05:45.425 metrics: explicitly disabled via build config
00:05:45.425 acl: explicitly disabled via build config
00:05:45.425 bbdev: explicitly disabled via build config
00:05:45.425 bitratestats: explicitly disabled via build config
00:05:45.425 bpf: explicitly disabled via build config
00:05:45.425 cfgfile: explicitly disabled via build config
00:05:45.425 distributor: explicitly disabled via build config
00:05:45.425 efd: explicitly disabled via build config
00:05:45.425 eventdev: explicitly disabled via build config
00:05:45.425 dispatcher: explicitly disabled via build config
00:05:45.425 gpudev: explicitly disabled via build config
00:05:45.425 gro: explicitly disabled via build config
00:05:45.425 gso: explicitly disabled via build config
00:05:45.425 ip_frag: explicitly disabled via build config
00:05:45.425 jobstats: explicitly disabled via build config
00:05:45.425 latencystats: explicitly disabled via build config
00:05:45.425 lpm: explicitly disabled via build config
00:05:45.425 member: explicitly disabled via build config
00:05:45.425 pcapng: explicitly disabled via build config
00:05:45.425 rawdev: explicitly disabled via build config
00:05:45.425 regexdev: explicitly disabled via build config
00:05:45.425 mldev: explicitly disabled via build config
00:05:45.425 rib: explicitly disabled via build config
00:05:45.425 sched: explicitly disabled via build config
00:05:45.425 stack: explicitly disabled via build config
00:05:45.425 ipsec: explicitly disabled via build config
00:05:45.425 pdcp: explicitly disabled via build config
00:05:45.425 fib: explicitly disabled via build config
00:05:45.425 port: explicitly disabled via build config
00:05:45.425 pdump: explicitly disabled via build config
00:05:45.425 table: explicitly disabled via build config
00:05:45.425 pipeline: explicitly disabled via build config
00:05:45.425 graph: explicitly disabled via build config
00:05:45.425 node: explicitly disabled via build config
00:05:45.425
00:05:45.425 drivers:
00:05:45.425 common/cpt: not in enabled drivers build config
00:05:45.425 common/dpaax: not in enabled drivers build config
00:05:45.425 common/iavf: not in enabled drivers build config
00:05:45.425 common/idpf: not in enabled drivers build config
00:05:45.425 common/ionic: not in enabled drivers build config
00:05:45.425 common/mvep: not in enabled drivers build config
00:05:45.425 common/octeontx: not in enabled drivers build config
00:05:45.425 bus/auxiliary: not in enabled drivers build config
00:05:45.425 bus/cdx: not in enabled drivers build config
00:05:45.425 bus/dpaa: not in enabled drivers build config
00:05:45.425 bus/fslmc: not in enabled drivers build config
00:05:45.425 bus/ifpga: not in enabled drivers build config
00:05:45.425 bus/platform: not in enabled drivers build config
00:05:45.425 bus/uacce: not in enabled drivers build config
00:05:45.425 bus/vmbus: not in enabled drivers build config
00:05:45.425 common/cnxk: not in enabled drivers build config
00:05:45.425 common/mlx5: not in enabled drivers build config
00:05:45.425 common/nfp: not in enabled drivers build config
00:05:45.425 common/nitrox: not in enabled drivers build config
00:05:45.425 common/qat: not in enabled drivers build config
00:05:45.425 common/sfc_efx: not in enabled drivers build config
00:05:45.425 mempool/bucket: not in enabled drivers build config
00:05:45.425 mempool/cnxk: not in enabled drivers build config
00:05:45.425 mempool/dpaa: not in enabled drivers build config
00:05:45.425 mempool/dpaa2: not in enabled drivers build config
00:05:45.425 mempool/octeontx: not in enabled drivers build config
00:05:45.425 mempool/stack: not in enabled drivers build config
00:05:45.425 dma/cnxk: not in enabled drivers build config
00:05:45.425 dma/dpaa: not in enabled drivers build config
00:05:45.425 dma/dpaa2: not in enabled drivers build config
00:05:45.425 dma/hisilicon: not in enabled drivers build config
00:05:45.425 dma/idxd: not in enabled drivers build config
00:05:45.425 dma/ioat: not in enabled drivers build config
00:05:45.425 dma/skeleton: not in enabled drivers build config
00:05:45.425 net/af_packet: not in enabled drivers build config
00:05:45.425 net/af_xdp: not in enabled drivers build config
00:05:45.425 net/ark: not in enabled drivers build config
00:05:45.425 net/atlantic: not in enabled drivers build config
00:05:45.425 net/avp: not in enabled drivers build config
00:05:45.425 net/axgbe: not in enabled drivers build config
00:05:45.425 net/bnx2x: not in enabled drivers build config
00:05:45.425 net/bnxt: not in enabled drivers build config
00:05:45.425 net/bonding: not in enabled drivers build config
00:05:45.425 net/cnxk: not in enabled drivers build config
00:05:45.425 net/cpfl: not in enabled drivers build config
00:05:45.425 net/cxgbe: not in enabled drivers build config
00:05:45.425 net/dpaa: not in enabled drivers build config
00:05:45.425 net/dpaa2: not in enabled drivers build config
00:05:45.425 net/e1000: not in enabled drivers build config
00:05:45.425 net/ena: not in enabled drivers build config
00:05:45.425 net/enetc: not in enabled drivers build config
00:05:45.425 net/enetfec: not in enabled drivers build config
00:05:45.425 net/enic: not in enabled drivers build config
00:05:45.425 net/failsafe: not in enabled drivers build config
00:05:45.425 net/fm10k: not in enabled drivers build config
00:05:45.425 net/gve: not in enabled drivers build config
00:05:45.425 net/hinic: not in enabled drivers build config
00:05:45.425 net/hns3: not in enabled drivers build config
00:05:45.425 net/i40e: not in enabled drivers build config
00:05:45.425 net/iavf: not in enabled drivers build config
00:05:45.425 net/ice: not in enabled drivers build config
00:05:45.425 net/idpf: not in enabled drivers build config
00:05:45.425 net/igc: not in enabled drivers build config
00:05:45.425 net/ionic: not in enabled drivers build config
00:05:45.425 net/ipn3ke: not in enabled drivers build config
00:05:45.425 net/ixgbe: not in enabled drivers build config
00:05:45.425 net/mana: not in enabled drivers build config
00:05:45.425 net/memif: not in enabled drivers build config
00:05:45.425 net/mlx4: not in enabled drivers build config
00:05:45.425 net/mlx5: not in enabled drivers build config
00:05:45.425 net/mvneta: not in enabled drivers build config
00:05:45.425 net/mvpp2: not in enabled drivers build config
00:05:45.425 net/netvsc: not in enabled drivers build config
00:05:45.425 net/nfb: not in enabled drivers build config
00:05:45.425 net/nfp: not in enabled drivers build config
00:05:45.425 net/ngbe: not in enabled drivers build config
00:05:45.425 net/null: not in enabled drivers build config
00:05:45.425 net/octeontx: not in enabled drivers build config
00:05:45.425 net/octeon_ep: not in enabled drivers build config
00:05:45.425 net/pcap: not in enabled drivers build config
00:05:45.425 net/pfe: not in enabled drivers build config
00:05:45.425 net/qede: not in enabled drivers build config
00:05:45.425 net/ring: not in enabled drivers build config
00:05:45.425 net/sfc: not in enabled drivers build config
00:05:45.425 net/softnic: not in enabled drivers build config
00:05:45.425 net/tap: not in enabled drivers build config
00:05:45.425 net/thunderx: not in enabled drivers build config
00:05:45.425 net/txgbe: not in enabled drivers build config
00:05:45.425 net/vdev_netvsc: not in enabled drivers build config
00:05:45.425 net/vhost: not in enabled drivers build config
00:05:45.425 net/virtio: not in enabled drivers build config
00:05:45.425 net/vmxnet3: not in enabled drivers build config
00:05:45.425 raw/*: missing internal dependency, "rawdev"
00:05:45.425 crypto/armv8: not in enabled drivers build config
00:05:45.425 crypto/bcmfs: not in enabled drivers build config
00:05:45.425 crypto/caam_jr: not in enabled drivers build config
00:05:45.425 crypto/ccp: not in enabled drivers build config
00:05:45.425 crypto/cnxk: not in enabled drivers build config
00:05:45.425 crypto/dpaa_sec: not in enabled drivers build config
00:05:45.425 crypto/dpaa2_sec: not in enabled drivers build config
00:05:45.425 crypto/ipsec_mb: not in enabled drivers build config
00:05:45.425 crypto/mlx5: not in enabled drivers build config
00:05:45.425 crypto/mvsam: not in enabled drivers build config
00:05:45.426 crypto/nitrox: not in enabled drivers build config
00:05:45.426 crypto/null: not in enabled drivers build config
00:05:45.426 crypto/octeontx: not in enabled drivers build config
00:05:45.426 crypto/openssl: not in enabled drivers build config
00:05:45.426 crypto/scheduler: not in enabled drivers build config
00:05:45.426 crypto/uadk: not in enabled drivers build config
00:05:45.426 crypto/virtio: not in enabled drivers build config
00:05:45.426 compress/isal: not in enabled drivers build config
00:05:45.426 compress/mlx5: not in enabled drivers build config
00:05:45.426 compress/nitrox: not in enabled drivers build config
00:05:45.426 compress/octeontx: not in enabled drivers build config
00:05:45.426 compress/zlib: not in enabled drivers build config
00:05:45.426 regex/*: missing internal dependency, "regexdev"
00:05:45.426 ml/*: missing internal dependency, "mldev"
00:05:45.426 vdpa/ifc: not in enabled drivers build config
00:05:45.426 vdpa/mlx5: not in enabled drivers build config
00:05:45.426 vdpa/nfp: not in enabled drivers build config
00:05:45.426 vdpa/sfc: not in enabled drivers build config
00:05:45.426 event/*: missing internal dependency, "eventdev"
00:05:45.426 baseband/*: missing internal dependency, "bbdev"
00:05:45.426 gpu/*: missing internal dependency, "gpudev"
00:05:45.426
00:05:45.426
00:05:45.426 Build targets in project: 84
00:05:45.426
00:05:45.426 DPDK 24.03.0
00:05:45.426
00:05:45.426 User defined options
00:05:45.426 buildtype : debug
00:05:45.426 default_library : shared
00:05:45.426 libdir : lib
00:05:45.426 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:45.426 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:45.426 c_link_args :
00:05:45.426 cpu_instruction_set: native
00:05:45.426 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:05:45.426 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:05:45.426 enable_docs : false
00:05:45.426 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:45.426 enable_kmods : false
00:05:45.426 max_lcores : 128
00:05:45.426 tests : false
00:05:45.426
00:05:45.426 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:45.426 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:05:45.426 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:45.426 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:45.426 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:45.426 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:45.426 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:45.426 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:45.426 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:45.426 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:45.426 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:45.426 [10/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:45.426 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:45.426 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:45.426 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:45.426 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:45.426 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:45.426 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:45.426 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:45.426 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:45.426 [19/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o - wait
00:05:45.426 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:45.684 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:45.684 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:45.684 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:45.684 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:45.684 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:45.684 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:45.684 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:45.684 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:45.684 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:45.946 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:45.946 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:45.946 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:45.946 [44/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:45.946 [45/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:45.946 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:45.946 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:45.946 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:45.946 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:45.946 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:45.946 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:45.946 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:45.946 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:45.946 [54/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:45.946 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:45.946 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:45.946 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:45.946 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:45.946 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:45.946 [60/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:45.946 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:45.946 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:45.946 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:45.946 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:45.946 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:45.946 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:45.946 [67/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:45.946 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:45.946 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:45.946 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:45.946 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:45.946 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:45.946 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:45.946 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:45.946 [75/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:45.946 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:45.946 [77/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:45.946 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:45.946 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:45.946 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:45.946 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:45.946 [82/267] Linking static target lib/librte_meter.a 00:05:45.946 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:45.946 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:45.946 [85/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:45.946 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:45.946 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:45.946 [88/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:45.946 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:45.946 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:45.946 [91/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:45.946 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:45.946 [93/267] Linking static target lib/librte_telemetry.a 00:05:45.946 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:45.946 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:45.946 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:45.946 [97/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:45.946 [98/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:45.946 [99/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:45.946 [100/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:45.946 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:45.946 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:45.946 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:45.946 [104/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:45.946 [105/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:45.946 [106/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:45.946 [107/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:45.946 [108/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:45.946 [109/267] Compiling C 
object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:45.946 [110/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:45.946 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:45.946 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:45.946 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:45.946 [114/267] Linking static target lib/librte_ring.a 00:05:45.946 [115/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:45.946 [116/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:45.946 [117/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:45.946 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:45.946 [119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:45.946 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:45.946 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:45.946 [122/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:45.946 [123/267] Linking static target lib/librte_cmdline.a 00:05:45.946 [124/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:45.946 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:45.946 [126/267] Linking static target lib/librte_timer.a 00:05:45.946 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:45.946 [128/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:45.946 [129/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:45.946 [130/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:45.946 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:45.946 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:45.947 [133/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.947 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:45.947 [135/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:45.947 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:45.947 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:45.947 [138/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:45.947 [139/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:45.947 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:45.947 [141/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:45.947 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:45.947 [143/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:45.947 [144/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:45.947 [145/267] Linking target lib/librte_log.so.24.1 00:05:45.947 [146/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:45.947 [147/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:45.947 [148/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:46.208 [149/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:46.208 [150/267] Linking 
static target lib/librte_dmadev.a 00:05:46.208 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:46.208 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:46.208 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:46.208 [154/267] Linking static target lib/librte_power.a 00:05:46.208 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:46.208 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:46.208 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:46.208 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:46.208 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:46.208 [160/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:46.208 [161/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:46.208 [162/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:46.208 [163/267] Linking static target lib/librte_compressdev.a 00:05:46.208 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:46.208 [165/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:46.208 [166/267] Linking static target lib/librte_net.a 00:05:46.208 [167/267] Linking static target lib/librte_mempool.a 00:05:46.208 [168/267] Linking static target lib/librte_rcu.a 00:05:46.208 [169/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:46.208 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:46.208 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:46.208 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:46.208 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:46.208 [174/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:46.208 [175/267] Linking static target lib/librte_reorder.a 00:05:46.208 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:46.208 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:46.208 [178/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.208 [179/267] Linking static target lib/librte_mbuf.a 00:05:46.208 [180/267] Linking static target lib/librte_eal.a 00:05:46.208 [181/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:46.208 [182/267] Linking static target lib/librte_security.a 00:05:46.208 [183/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:46.208 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:46.208 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:46.208 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:46.208 [187/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.208 [188/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.208 [189/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:46.208 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:46.208 [191/267] Linking static target drivers/librte_mempool_ring.a 00:05:46.208 [192/267] 
Linking static target drivers/librte_bus_vdev.a 00:05:46.208 [193/267] Linking target lib/librte_kvargs.so.24.1 00:05:46.208 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:46.208 [195/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:46.208 [196/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:46.470 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:46.470 [198/267] Linking static target lib/librte_hash.a 00:05:46.470 [199/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.470 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:46.470 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:46.470 [202/267] Linking static target drivers/librte_bus_pci.a 00:05:46.470 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:46.470 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:46.470 [205/267] Linking static target lib/librte_cryptodev.a 00:05:46.470 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:46.470 [207/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.470 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.470 [209/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.470 [210/267] Linking target lib/librte_telemetry.so.24.1 00:05:46.470 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.732 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:46.732 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.732 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.732 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.993 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:46.993 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.993 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.993 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:46.993 [220/267] Linking static target lib/librte_ethdev.a 00:05:46.993 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.993 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.254 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.254 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.514 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.514 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.776 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:47.776 [228/267] Linking static target lib/librte_vhost.a 00:05:48.719 
[229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.104 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.689 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.075 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.075 [233/267] Linking target lib/librte_eal.so.24.1 00:05:58.075 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:58.075 [235/267] Linking target lib/librte_meter.so.24.1 00:05:58.075 [236/267] Linking target lib/librte_ring.so.24.1 00:05:58.075 [237/267] Linking target lib/librte_timer.so.24.1 00:05:58.075 [238/267] Linking target lib/librte_pci.so.24.1 00:05:58.075 [239/267] Linking target lib/librte_dmadev.so.24.1 00:05:58.075 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:05:58.075 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:58.075 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:58.075 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:58.075 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:58.334 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:58.334 [246/267] Linking target lib/librte_rcu.so.24.1 00:05:58.334 [247/267] Linking target lib/librte_mempool.so.24.1 00:05:58.334 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:05:58.334 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:58.334 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:58.334 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:05:58.334 [252/267] Linking target lib/librte_mbuf.so.24.1 00:05:58.593 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:58.593 [254/267] Linking target lib/librte_compressdev.so.24.1 00:05:58.593 [255/267] Linking target lib/librte_reorder.so.24.1 00:05:58.593 [256/267] Linking target lib/librte_net.so.24.1 00:05:58.593 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:05:58.593 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:58.854 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:58.854 [260/267] Linking target lib/librte_hash.so.24.1 00:05:58.854 [261/267] Linking target lib/librte_cmdline.so.24.1 00:05:58.854 [262/267] Linking target lib/librte_security.so.24.1 00:05:58.854 [263/267] Linking target lib/librte_ethdev.so.24.1 00:05:58.854 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:58.854 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:59.114 [266/267] Linking target lib/librte_power.so.24.1 00:05:59.114 [267/267] Linking target lib/librte_vhost.so.24.1 00:05:59.114 INFO: autodetecting backend as ninja 00:05:59.114 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:06:02.416 CC lib/log/log.o 00:06:02.416 CC lib/log/log_flags.o 00:06:02.416 CC lib/ut/ut.o 00:06:02.416 CC lib/log/log_deprecated.o 00:06:02.416 CC lib/ut_mock/mock.o 
00:06:02.416 LIB libspdk_log.a 00:06:02.416 LIB libspdk_ut.a 00:06:02.416 LIB libspdk_ut_mock.a 00:06:02.416 SO libspdk_ut.so.2.0 00:06:02.416 SO libspdk_log.so.7.1 00:06:02.416 SO libspdk_ut_mock.so.6.0 00:06:02.416 SYMLINK libspdk_ut_mock.so 00:06:02.416 SYMLINK libspdk_ut.so 00:06:02.416 SYMLINK libspdk_log.so 00:06:02.678 CC lib/ioat/ioat.o 00:06:02.678 CC lib/dma/dma.o 00:06:02.678 CC lib/util/base64.o 00:06:02.678 CC lib/util/bit_array.o 00:06:02.678 CXX lib/trace_parser/trace.o 00:06:02.678 CC lib/util/cpuset.o 00:06:02.678 CC lib/util/crc16.o 00:06:02.678 CC lib/util/crc32.o 00:06:02.678 CC lib/util/crc32c.o 00:06:02.678 CC lib/util/crc32_ieee.o 00:06:02.678 CC lib/util/crc64.o 00:06:02.678 CC lib/util/dif.o 00:06:02.678 CC lib/util/fd.o 00:06:02.678 CC lib/util/fd_group.o 00:06:02.678 CC lib/util/file.o 00:06:02.678 CC lib/util/hexlify.o 00:06:02.678 CC lib/util/iov.o 00:06:02.678 CC lib/util/math.o 00:06:02.678 CC lib/util/net.o 00:06:02.678 CC lib/util/pipe.o 00:06:02.678 CC lib/util/strerror_tls.o 00:06:02.678 CC lib/util/string.o 00:06:02.678 CC lib/util/uuid.o 00:06:02.678 CC lib/util/xor.o 00:06:02.678 CC lib/util/zipf.o 00:06:02.678 CC lib/util/md5.o 00:06:02.939 CC lib/vfio_user/host/vfio_user.o 00:06:02.939 CC lib/vfio_user/host/vfio_user_pci.o 00:06:02.939 LIB libspdk_dma.a 00:06:02.939 LIB libspdk_ioat.a 00:06:02.939 SO libspdk_dma.so.5.0 00:06:02.939 SO libspdk_ioat.so.7.0 00:06:02.939 SYMLINK libspdk_dma.so 00:06:02.939 SYMLINK libspdk_ioat.so 00:06:03.201 LIB libspdk_vfio_user.a 00:06:03.201 SO libspdk_vfio_user.so.5.0 00:06:03.201 LIB libspdk_util.a 00:06:03.201 SYMLINK libspdk_vfio_user.so 00:06:03.201 SO libspdk_util.so.10.1 00:06:03.462 SYMLINK libspdk_util.so 00:06:03.462 LIB libspdk_trace_parser.a 00:06:03.462 SO libspdk_trace_parser.so.6.0 00:06:03.724 SYMLINK libspdk_trace_parser.so 00:06:03.724 CC lib/rdma_utils/rdma_utils.o 00:06:03.724 CC lib/vmd/vmd.o 00:06:03.724 CC lib/vmd/led.o 00:06:03.724 CC lib/conf/conf.o 00:06:03.724 CC lib/json/json_parse.o 00:06:03.724 CC lib/json/json_util.o 00:06:03.724 CC lib/env_dpdk/env.o 00:06:03.724 CC lib/json/json_write.o 00:06:03.724 CC lib/idxd/idxd.o 00:06:03.724 CC lib/env_dpdk/memory.o 00:06:03.724 CC lib/idxd/idxd_user.o 00:06:03.724 CC lib/env_dpdk/pci.o 00:06:03.724 CC lib/idxd/idxd_kernel.o 00:06:03.724 CC lib/env_dpdk/init.o 00:06:03.724 CC lib/env_dpdk/threads.o 00:06:03.724 CC lib/env_dpdk/pci_ioat.o 00:06:03.724 CC lib/env_dpdk/pci_virtio.o 00:06:03.724 CC lib/env_dpdk/pci_vmd.o 00:06:03.724 CC lib/env_dpdk/pci_idxd.o 00:06:03.724 CC lib/env_dpdk/pci_event.o 00:06:03.724 CC lib/env_dpdk/sigbus_handler.o 00:06:03.724 CC lib/env_dpdk/pci_dpdk.o 00:06:03.724 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:03.724 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:03.986 LIB libspdk_conf.a 00:06:03.986 LIB libspdk_rdma_utils.a 00:06:03.986 SO libspdk_conf.so.6.0 00:06:04.247 SO libspdk_rdma_utils.so.1.0 00:06:04.247 LIB libspdk_json.a 00:06:04.247 SO libspdk_json.so.6.0 00:06:04.247 SYMLINK libspdk_conf.so 00:06:04.247 SYMLINK libspdk_rdma_utils.so 00:06:04.247 SYMLINK libspdk_json.so 00:06:04.247 LIB libspdk_idxd.a 00:06:04.508 SO libspdk_idxd.so.12.1 00:06:04.508 LIB libspdk_vmd.a 00:06:04.508 SO libspdk_vmd.so.6.0 00:06:04.508 SYMLINK libspdk_idxd.so 00:06:04.508 SYMLINK libspdk_vmd.so 00:06:04.508 CC lib/rdma_provider/common.o 00:06:04.508 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:04.508 CC lib/jsonrpc/jsonrpc_server.o 00:06:04.508 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:04.508 CC lib/jsonrpc/jsonrpc_client.o 00:06:04.508 
CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:04.769 LIB libspdk_rdma_provider.a 00:06:04.769 SO libspdk_rdma_provider.so.7.0 00:06:04.769 LIB libspdk_jsonrpc.a 00:06:05.031 SO libspdk_jsonrpc.so.6.0 00:06:05.031 SYMLINK libspdk_rdma_provider.so 00:06:05.031 SYMLINK libspdk_jsonrpc.so 00:06:05.031 LIB libspdk_env_dpdk.a 00:06:05.031 SO libspdk_env_dpdk.so.15.1 00:06:05.292 SYMLINK libspdk_env_dpdk.so 00:06:05.292 CC lib/rpc/rpc.o 00:06:05.554 LIB libspdk_rpc.a 00:06:05.554 SO libspdk_rpc.so.6.0 00:06:05.816 SYMLINK libspdk_rpc.so 00:06:06.077 CC lib/notify/notify.o 00:06:06.077 CC lib/keyring/keyring.o 00:06:06.077 CC lib/notify/notify_rpc.o 00:06:06.077 CC lib/keyring/keyring_rpc.o 00:06:06.077 CC lib/trace/trace.o 00:06:06.077 CC lib/trace/trace_flags.o 00:06:06.077 CC lib/trace/trace_rpc.o 00:06:06.337 LIB libspdk_notify.a 00:06:06.337 SO libspdk_notify.so.6.0 00:06:06.337 LIB libspdk_keyring.a 00:06:06.337 LIB libspdk_trace.a 00:06:06.337 SO libspdk_keyring.so.2.0 00:06:06.337 SO libspdk_trace.so.11.0 00:06:06.337 SYMLINK libspdk_notify.so 00:06:06.337 SYMLINK libspdk_keyring.so 00:06:06.337 SYMLINK libspdk_trace.so 00:06:06.909 CC lib/thread/thread.o 00:06:06.909 CC lib/thread/iobuf.o 00:06:06.909 CC lib/sock/sock.o 00:06:06.909 CC lib/sock/sock_rpc.o 00:06:07.171 LIB libspdk_sock.a 00:06:07.171 SO libspdk_sock.so.10.0 00:06:07.171 SYMLINK libspdk_sock.so 00:06:07.743 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:07.743 CC lib/nvme/nvme_ctrlr.o 00:06:07.743 CC lib/nvme/nvme_fabric.o 00:06:07.743 CC lib/nvme/nvme_ns_cmd.o 00:06:07.743 CC lib/nvme/nvme_ns.o 00:06:07.743 CC lib/nvme/nvme_pcie_common.o 00:06:07.743 CC lib/nvme/nvme_pcie.o 00:06:07.743 CC lib/nvme/nvme_qpair.o 00:06:07.743 CC lib/nvme/nvme.o 00:06:07.743 CC lib/nvme/nvme_quirks.o 00:06:07.743 CC lib/nvme/nvme_transport.o 00:06:07.743 CC lib/nvme/nvme_discovery.o 00:06:07.743 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:07.743 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:07.743 CC lib/nvme/nvme_tcp.o 00:06:07.743 CC lib/nvme/nvme_opal.o 00:06:07.743 CC lib/nvme/nvme_io_msg.o 00:06:07.743 CC lib/nvme/nvme_poll_group.o 00:06:07.743 CC lib/nvme/nvme_zns.o 00:06:07.743 CC lib/nvme/nvme_stubs.o 00:06:07.743 CC lib/nvme/nvme_auth.o 00:06:07.743 CC lib/nvme/nvme_cuse.o 00:06:07.743 CC lib/nvme/nvme_vfio_user.o 00:06:07.743 CC lib/nvme/nvme_rdma.o 00:06:08.005 LIB libspdk_thread.a 00:06:08.005 SO libspdk_thread.so.11.0 00:06:08.264 SYMLINK libspdk_thread.so 00:06:08.525 CC lib/blob/blobstore.o 00:06:08.525 CC lib/blob/request.o 00:06:08.525 CC lib/blob/zeroes.o 00:06:08.525 CC lib/blob/blob_bs_dev.o 00:06:08.525 CC lib/fsdev/fsdev.o 00:06:08.525 CC lib/fsdev/fsdev_io.o 00:06:08.525 CC lib/init/json_config.o 00:06:08.525 CC lib/fsdev/fsdev_rpc.o 00:06:08.525 CC lib/init/subsystem.o 00:06:08.525 CC lib/init/subsystem_rpc.o 00:06:08.525 CC lib/init/rpc.o 00:06:08.525 CC lib/virtio/virtio.o 00:06:08.525 CC lib/virtio/virtio_vhost_user.o 00:06:08.525 CC lib/vfu_tgt/tgt_endpoint.o 00:06:08.525 CC lib/virtio/virtio_vfio_user.o 00:06:08.525 CC lib/virtio/virtio_pci.o 00:06:08.525 CC lib/vfu_tgt/tgt_rpc.o 00:06:08.525 CC lib/accel/accel.o 00:06:08.525 CC lib/accel/accel_rpc.o 00:06:08.525 CC lib/accel/accel_sw.o 00:06:08.787 LIB libspdk_init.a 00:06:08.787 SO libspdk_init.so.6.0 00:06:09.049 LIB libspdk_virtio.a 00:06:09.049 LIB libspdk_vfu_tgt.a 00:06:09.049 SYMLINK libspdk_init.so 00:06:09.049 SO libspdk_vfu_tgt.so.3.0 00:06:09.049 SO libspdk_virtio.so.7.0 00:06:09.049 SYMLINK libspdk_vfu_tgt.so 00:06:09.049 SYMLINK libspdk_virtio.so 00:06:09.049 LIB 
libspdk_fsdev.a 00:06:09.365 SO libspdk_fsdev.so.2.0 00:06:09.365 CC lib/event/app.o 00:06:09.365 CC lib/event/reactor.o 00:06:09.365 CC lib/event/log_rpc.o 00:06:09.365 CC lib/event/app_rpc.o 00:06:09.365 CC lib/event/scheduler_static.o 00:06:09.365 SYMLINK libspdk_fsdev.so 00:06:09.682 LIB libspdk_accel.a 00:06:09.682 LIB libspdk_nvme.a 00:06:09.682 SO libspdk_accel.so.16.0 00:06:09.682 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:09.682 SYMLINK libspdk_accel.so 00:06:09.682 SO libspdk_nvme.so.15.0 00:06:09.682 LIB libspdk_event.a 00:06:09.977 SO libspdk_event.so.14.0 00:06:09.977 SYMLINK libspdk_event.so 00:06:09.977 SYMLINK libspdk_nvme.so 00:06:09.977 CC lib/bdev/bdev.o 00:06:09.977 CC lib/bdev/bdev_rpc.o 00:06:09.977 CC lib/bdev/bdev_zone.o 00:06:09.977 CC lib/bdev/part.o 00:06:09.977 CC lib/bdev/scsi_nvme.o 00:06:10.239 LIB libspdk_fuse_dispatcher.a 00:06:10.239 SO libspdk_fuse_dispatcher.so.1.0 00:06:10.500 SYMLINK libspdk_fuse_dispatcher.so 00:06:11.072 LIB libspdk_blob.a 00:06:11.333 SO libspdk_blob.so.12.0 00:06:11.333 SYMLINK libspdk_blob.so 00:06:11.594 CC lib/lvol/lvol.o 00:06:11.594 CC lib/blobfs/blobfs.o 00:06:11.594 CC lib/blobfs/tree.o 00:06:12.536 LIB libspdk_blobfs.a 00:06:12.536 LIB libspdk_bdev.a 00:06:12.536 SO libspdk_blobfs.so.11.0 00:06:12.536 SO libspdk_bdev.so.17.0 00:06:12.536 LIB libspdk_lvol.a 00:06:12.536 SYMLINK libspdk_blobfs.so 00:06:12.536 SO libspdk_lvol.so.11.0 00:06:12.536 SYMLINK libspdk_bdev.so 00:06:12.536 SYMLINK libspdk_lvol.so 00:06:13.106 CC lib/ftl/ftl_core.o 00:06:13.106 CC lib/ftl/ftl_init.o 00:06:13.106 CC lib/ftl/ftl_layout.o 00:06:13.106 CC lib/ftl/ftl_debug.o 00:06:13.106 CC lib/ftl/ftl_io.o 00:06:13.106 CC lib/ftl/ftl_sb.o 00:06:13.106 CC lib/scsi/dev.o 00:06:13.106 CC lib/ftl/ftl_l2p.o 00:06:13.106 CC lib/nvmf/ctrlr.o 00:06:13.106 CC lib/ftl/ftl_l2p_flat.o 00:06:13.106 CC lib/scsi/lun.o 00:06:13.106 CC lib/nbd/nbd.o 00:06:13.106 CC lib/ublk/ublk.o 00:06:13.106 CC lib/ftl/ftl_nv_cache.o 00:06:13.106 CC lib/nvmf/ctrlr_discovery.o 00:06:13.106 CC lib/nbd/nbd_rpc.o 00:06:13.106 CC lib/scsi/port.o 00:06:13.106 CC lib/ftl/ftl_band.o 00:06:13.106 CC lib/nvmf/ctrlr_bdev.o 00:06:13.106 CC lib/ublk/ublk_rpc.o 00:06:13.106 CC lib/scsi/scsi.o 00:06:13.106 CC lib/scsi/scsi_bdev.o 00:06:13.106 CC lib/ftl/ftl_band_ops.o 00:06:13.106 CC lib/nvmf/subsystem.o 00:06:13.106 CC lib/scsi/scsi_pr.o 00:06:13.106 CC lib/ftl/ftl_writer.o 00:06:13.106 CC lib/nvmf/nvmf.o 00:06:13.106 CC lib/scsi/scsi_rpc.o 00:06:13.106 CC lib/nvmf/nvmf_rpc.o 00:06:13.106 CC lib/ftl/ftl_rq.o 00:06:13.106 CC lib/scsi/task.o 00:06:13.106 CC lib/nvmf/transport.o 00:06:13.106 CC lib/ftl/ftl_reloc.o 00:06:13.106 CC lib/nvmf/tcp.o 00:06:13.106 CC lib/ftl/ftl_l2p_cache.o 00:06:13.106 CC lib/nvmf/stubs.o 00:06:13.106 CC lib/ftl/ftl_p2l.o 00:06:13.106 CC lib/nvmf/mdns_server.o 00:06:13.106 CC lib/nvmf/vfio_user.o 00:06:13.106 CC lib/ftl/ftl_p2l_log.o 00:06:13.106 CC lib/nvmf/rdma.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt.o 00:06:13.106 CC lib/nvmf/auth.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:13.106 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:13.106 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:13.106 CC lib/ftl/utils/ftl_conf.o 00:06:13.106 CC lib/ftl/utils/ftl_md.o 00:06:13.106 CC lib/ftl/utils/ftl_mempool.o 00:06:13.106 CC lib/ftl/utils/ftl_property.o 00:06:13.106 CC lib/ftl/utils/ftl_bitmap.o 00:06:13.106 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:13.106 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:13.106 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:13.106 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:13.106 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:13.106 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:13.106 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:13.106 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:13.106 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:13.106 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:13.106 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:13.106 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:13.106 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:13.106 CC lib/ftl/base/ftl_base_dev.o 00:06:13.106 CC lib/ftl/base/ftl_base_bdev.o 00:06:13.106 CC lib/ftl/ftl_trace.o 00:06:13.365 LIB libspdk_nbd.a 00:06:13.625 SO libspdk_nbd.so.7.0 00:06:13.625 LIB libspdk_scsi.a 00:06:13.625 SYMLINK libspdk_nbd.so 00:06:13.625 SO libspdk_scsi.so.9.0 00:06:13.625 LIB libspdk_ublk.a 00:06:13.625 SO libspdk_ublk.so.3.0 00:06:13.625 SYMLINK libspdk_scsi.so 00:06:13.886 SYMLINK libspdk_ublk.so 00:06:13.886 LIB libspdk_ftl.a 00:06:14.146 CC lib/iscsi/conn.o 00:06:14.146 CC lib/iscsi/init_grp.o 00:06:14.146 CC lib/iscsi/iscsi.o 00:06:14.146 CC lib/iscsi/param.o 00:06:14.146 CC lib/iscsi/portal_grp.o 00:06:14.146 CC lib/vhost/vhost.o 00:06:14.146 CC lib/iscsi/tgt_node.o 00:06:14.146 CC lib/vhost/vhost_rpc.o 00:06:14.146 CC lib/iscsi/iscsi_subsystem.o 00:06:14.146 CC lib/vhost/vhost_scsi.o 00:06:14.146 CC lib/iscsi/iscsi_rpc.o 00:06:14.146 CC lib/vhost/vhost_blk.o 00:06:14.146 CC lib/iscsi/task.o 00:06:14.146 CC lib/vhost/rte_vhost_user.o 00:06:14.146 SO libspdk_ftl.so.9.0 00:06:14.406 SYMLINK libspdk_ftl.so 00:06:14.978 LIB libspdk_nvmf.a 00:06:14.978 SO libspdk_nvmf.so.20.0 00:06:14.978 LIB libspdk_vhost.a 00:06:15.240 SO libspdk_vhost.so.8.0 00:06:15.240 SYMLINK libspdk_nvmf.so 00:06:15.240 SYMLINK libspdk_vhost.so 00:06:15.240 LIB libspdk_iscsi.a 00:06:15.502 SO libspdk_iscsi.so.8.0 00:06:15.502 SYMLINK libspdk_iscsi.so 00:06:16.077 CC module/env_dpdk/env_dpdk_rpc.o 00:06:16.077 CC module/vfu_device/vfu_virtio.o 00:06:16.077 CC module/vfu_device/vfu_virtio_blk.o 00:06:16.077 CC module/vfu_device/vfu_virtio_scsi.o 00:06:16.077 CC module/vfu_device/vfu_virtio_rpc.o 00:06:16.077 CC module/vfu_device/vfu_virtio_fs.o 00:06:16.339 LIB libspdk_env_dpdk_rpc.a 00:06:16.339 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:16.339 CC module/accel/error/accel_error.o 00:06:16.339 CC module/accel/error/accel_error_rpc.o 00:06:16.339 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:16.339 CC module/keyring/linux/keyring.o 00:06:16.339 CC module/accel/ioat/accel_ioat.o 00:06:16.339 CC module/sock/posix/posix.o 00:06:16.339 CC module/keyring/linux/keyring_rpc.o 00:06:16.339 CC module/accel/ioat/accel_ioat_rpc.o 00:06:16.339 CC module/keyring/file/keyring.o 00:06:16.339 CC module/keyring/file/keyring_rpc.o 00:06:16.339 CC module/blob/bdev/blob_bdev.o 00:06:16.339 CC module/accel/dsa/accel_dsa.o 00:06:16.339 CC module/accel/iaa/accel_iaa.o 00:06:16.339 CC module/accel/dsa/accel_dsa_rpc.o 00:06:16.339 CC module/accel/iaa/accel_iaa_rpc.o 00:06:16.339 CC module/scheduler/gscheduler/gscheduler.o 00:06:16.339 CC module/fsdev/aio/fsdev_aio.o 00:06:16.339 CC module/fsdev/aio/linux_aio_mgr.o 
00:06:16.339 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:16.339 SO libspdk_env_dpdk_rpc.so.6.0 00:06:16.339 SYMLINK libspdk_env_dpdk_rpc.so 00:06:16.600 LIB libspdk_scheduler_dpdk_governor.a 00:06:16.600 LIB libspdk_keyring_file.a 00:06:16.600 LIB libspdk_keyring_linux.a 00:06:16.600 LIB libspdk_scheduler_gscheduler.a 00:06:16.600 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:16.600 LIB libspdk_scheduler_dynamic.a 00:06:16.601 SO libspdk_keyring_linux.so.1.0 00:06:16.601 SO libspdk_keyring_file.so.2.0 00:06:16.601 LIB libspdk_accel_error.a 00:06:16.601 SO libspdk_scheduler_gscheduler.so.4.0 00:06:16.601 LIB libspdk_accel_ioat.a 00:06:16.601 LIB libspdk_accel_iaa.a 00:06:16.601 SO libspdk_scheduler_dynamic.so.4.0 00:06:16.601 SO libspdk_accel_ioat.so.6.0 00:06:16.601 SO libspdk_accel_error.so.2.0 00:06:16.601 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:16.601 SO libspdk_accel_iaa.so.3.0 00:06:16.601 SYMLINK libspdk_keyring_linux.so 00:06:16.601 SYMLINK libspdk_scheduler_gscheduler.so 00:06:16.601 SYMLINK libspdk_keyring_file.so 00:06:16.601 LIB libspdk_blob_bdev.a 00:06:16.601 LIB libspdk_accel_dsa.a 00:06:16.601 SYMLINK libspdk_scheduler_dynamic.so 00:06:16.601 SO libspdk_blob_bdev.so.12.0 00:06:16.601 SYMLINK libspdk_accel_ioat.so 00:06:16.601 SYMLINK libspdk_accel_error.so 00:06:16.601 SO libspdk_accel_dsa.so.5.0 00:06:16.601 SYMLINK libspdk_accel_iaa.so 00:06:16.601 LIB libspdk_vfu_device.a 00:06:16.862 SYMLINK libspdk_blob_bdev.so 00:06:16.862 SYMLINK libspdk_accel_dsa.so 00:06:16.862 SO libspdk_vfu_device.so.3.0 00:06:16.862 SYMLINK libspdk_vfu_device.so 00:06:16.862 LIB libspdk_fsdev_aio.a 00:06:17.123 SO libspdk_fsdev_aio.so.1.0 00:06:17.123 LIB libspdk_sock_posix.a 00:06:17.123 SO libspdk_sock_posix.so.6.0 00:06:17.123 SYMLINK libspdk_fsdev_aio.so 00:06:17.123 SYMLINK libspdk_sock_posix.so 00:06:17.384 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:17.384 CC module/bdev/delay/vbdev_delay.o 00:06:17.384 CC module/bdev/lvol/vbdev_lvol.o 00:06:17.384 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:17.384 CC module/bdev/error/vbdev_error.o 00:06:17.384 CC module/bdev/gpt/gpt.o 00:06:17.384 CC module/bdev/gpt/vbdev_gpt.o 00:06:17.384 CC module/blobfs/bdev/blobfs_bdev.o 00:06:17.384 CC module/bdev/aio/bdev_aio.o 00:06:17.384 CC module/bdev/error/vbdev_error_rpc.o 00:06:17.384 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:17.384 CC module/bdev/aio/bdev_aio_rpc.o 00:06:17.384 CC module/bdev/null/bdev_null.o 00:06:17.384 CC module/bdev/split/vbdev_split.o 00:06:17.384 CC module/bdev/null/bdev_null_rpc.o 00:06:17.384 CC module/bdev/split/vbdev_split_rpc.o 00:06:17.384 CC module/bdev/raid/bdev_raid.o 00:06:17.384 CC module/bdev/nvme/bdev_nvme.o 00:06:17.384 CC module/bdev/raid/bdev_raid_rpc.o 00:06:17.384 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:17.384 CC module/bdev/malloc/bdev_malloc.o 00:06:17.384 CC module/bdev/raid/bdev_raid_sb.o 00:06:17.384 CC module/bdev/nvme/nvme_rpc.o 00:06:17.384 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:17.384 CC module/bdev/raid/raid0.o 00:06:17.384 CC module/bdev/iscsi/bdev_iscsi.o 00:06:17.384 CC module/bdev/nvme/bdev_mdns_client.o 00:06:17.384 CC module/bdev/raid/raid1.o 00:06:17.384 CC module/bdev/passthru/vbdev_passthru.o 00:06:17.384 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:17.384 CC module/bdev/nvme/vbdev_opal.o 00:06:17.384 CC module/bdev/raid/concat.o 00:06:17.384 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:17.384 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:17.384 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:17.384 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:17.384 CC module/bdev/ftl/bdev_ftl.o 00:06:17.384 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:17.384 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:17.384 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:17.384 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:17.384 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:17.646 LIB libspdk_blobfs_bdev.a 00:06:17.646 SO libspdk_blobfs_bdev.so.6.0 00:06:17.646 LIB libspdk_bdev_split.a 00:06:17.646 LIB libspdk_bdev_gpt.a 00:06:17.646 LIB libspdk_bdev_error.a 00:06:17.646 LIB libspdk_bdev_null.a 00:06:17.646 SO libspdk_bdev_split.so.6.0 00:06:17.646 SYMLINK libspdk_blobfs_bdev.so 00:06:17.646 SO libspdk_bdev_gpt.so.6.0 00:06:17.646 SO libspdk_bdev_error.so.6.0 00:06:17.646 SO libspdk_bdev_null.so.6.0 00:06:17.646 LIB libspdk_bdev_ftl.a 00:06:17.646 LIB libspdk_bdev_delay.a 00:06:17.646 LIB libspdk_bdev_passthru.a 00:06:17.646 SO libspdk_bdev_ftl.so.6.0 00:06:17.646 SYMLINK libspdk_bdev_split.so 00:06:17.646 SO libspdk_bdev_passthru.so.6.0 00:06:17.907 SO libspdk_bdev_delay.so.6.0 00:06:17.907 LIB libspdk_bdev_aio.a 00:06:17.907 LIB libspdk_bdev_zone_block.a 00:06:17.907 SYMLINK libspdk_bdev_error.so 00:06:17.907 SYMLINK libspdk_bdev_gpt.so 00:06:17.907 SYMLINK libspdk_bdev_null.so 00:06:17.907 LIB libspdk_bdev_malloc.a 00:06:17.907 LIB libspdk_bdev_iscsi.a 00:06:17.907 SO libspdk_bdev_zone_block.so.6.0 00:06:17.907 SO libspdk_bdev_aio.so.6.0 00:06:17.907 SYMLINK libspdk_bdev_ftl.so 00:06:17.907 SO libspdk_bdev_malloc.so.6.0 00:06:17.907 SYMLINK libspdk_bdev_delay.so 00:06:17.907 SYMLINK libspdk_bdev_passthru.so 00:06:17.907 SO libspdk_bdev_iscsi.so.6.0 00:06:17.907 LIB libspdk_bdev_lvol.a 00:06:17.907 SYMLINK libspdk_bdev_zone_block.so 00:06:17.907 SYMLINK libspdk_bdev_aio.so 00:06:17.907 SYMLINK libspdk_bdev_malloc.so 00:06:17.907 SO libspdk_bdev_lvol.so.6.0 00:06:17.907 SYMLINK libspdk_bdev_iscsi.so 00:06:17.907 LIB libspdk_bdev_virtio.a 00:06:17.907 SO libspdk_bdev_virtio.so.6.0 00:06:17.907 SYMLINK libspdk_bdev_lvol.so 00:06:18.167 SYMLINK libspdk_bdev_virtio.so 00:06:18.428 LIB libspdk_bdev_raid.a 00:06:18.428 SO libspdk_bdev_raid.so.6.0 00:06:18.428 SYMLINK libspdk_bdev_raid.so 00:06:19.813 LIB libspdk_bdev_nvme.a 00:06:19.813 SO libspdk_bdev_nvme.so.7.1 00:06:19.813 SYMLINK libspdk_bdev_nvme.so 00:06:20.758 CC module/event/subsystems/iobuf/iobuf.o 00:06:20.758 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:20.758 CC module/event/subsystems/sock/sock.o 00:06:20.758 CC module/event/subsystems/vmd/vmd.o 00:06:20.758 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:20.758 CC module/event/subsystems/scheduler/scheduler.o 00:06:20.758 CC module/event/subsystems/keyring/keyring.o 00:06:20.758 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:20.758 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:20.758 CC module/event/subsystems/fsdev/fsdev.o 00:06:20.758 LIB libspdk_event_vfu_tgt.a 00:06:20.758 LIB libspdk_event_keyring.a 00:06:20.758 LIB libspdk_event_sock.a 00:06:20.758 LIB libspdk_event_vmd.a 00:06:20.758 LIB libspdk_event_vhost_blk.a 00:06:20.758 LIB libspdk_event_fsdev.a 00:06:20.758 LIB libspdk_event_iobuf.a 00:06:20.758 LIB libspdk_event_scheduler.a 00:06:20.758 SO libspdk_event_vfu_tgt.so.3.0 00:06:20.758 SO libspdk_event_sock.so.5.0 00:06:20.758 SO libspdk_event_keyring.so.1.0 00:06:20.758 SO libspdk_event_fsdev.so.1.0 00:06:20.758 SO libspdk_event_vmd.so.6.0 00:06:20.758 SO libspdk_event_vhost_blk.so.3.0 00:06:20.758 SO libspdk_event_scheduler.so.4.0 00:06:20.758 SO 
libspdk_event_iobuf.so.3.0 00:06:20.758 SYMLINK libspdk_event_sock.so 00:06:20.758 SYMLINK libspdk_event_vfu_tgt.so 00:06:20.758 SYMLINK libspdk_event_keyring.so 00:06:20.758 SYMLINK libspdk_event_fsdev.so 00:06:20.758 SYMLINK libspdk_event_scheduler.so 00:06:20.758 SYMLINK libspdk_event_vhost_blk.so 00:06:20.758 SYMLINK libspdk_event_vmd.so 00:06:20.758 SYMLINK libspdk_event_iobuf.so 00:06:21.330 CC module/event/subsystems/accel/accel.o 00:06:21.330 LIB libspdk_event_accel.a 00:06:21.330 SO libspdk_event_accel.so.6.0 00:06:21.592 SYMLINK libspdk_event_accel.so 00:06:21.869 CC module/event/subsystems/bdev/bdev.o 00:06:22.131 LIB libspdk_event_bdev.a 00:06:22.131 SO libspdk_event_bdev.so.6.0 00:06:22.131 SYMLINK libspdk_event_bdev.so 00:06:22.392 CC module/event/subsystems/scsi/scsi.o 00:06:22.392 CC module/event/subsystems/ublk/ublk.o 00:06:22.392 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:22.392 CC module/event/subsystems/nbd/nbd.o 00:06:22.392 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:22.654 LIB libspdk_event_ublk.a 00:06:22.654 LIB libspdk_event_nbd.a 00:06:22.654 LIB libspdk_event_scsi.a 00:06:22.654 SO libspdk_event_nbd.so.6.0 00:06:22.654 SO libspdk_event_ublk.so.3.0 00:06:22.654 SO libspdk_event_scsi.so.6.0 00:06:22.654 LIB libspdk_event_nvmf.a 00:06:22.654 SYMLINK libspdk_event_nbd.so 00:06:22.654 SYMLINK libspdk_event_ublk.so 00:06:22.654 SYMLINK libspdk_event_scsi.so 00:06:22.654 SO libspdk_event_nvmf.so.6.0 00:06:22.915 SYMLINK libspdk_event_nvmf.so 00:06:23.176 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:23.176 CC module/event/subsystems/iscsi/iscsi.o 00:06:23.176 LIB libspdk_event_vhost_scsi.a 00:06:23.436 LIB libspdk_event_iscsi.a 00:06:23.436 SO libspdk_event_vhost_scsi.so.3.0 00:06:23.436 SO libspdk_event_iscsi.so.6.0 00:06:23.436 SYMLINK libspdk_event_vhost_scsi.so 00:06:23.436 SYMLINK libspdk_event_iscsi.so 00:06:23.698 SO libspdk.so.6.0 00:06:23.698 SYMLINK libspdk.so 00:06:23.959 CXX app/trace/trace.o 00:06:23.959 CC app/trace_record/trace_record.o 00:06:23.959 CC app/spdk_nvme_identify/identify.o 00:06:23.959 CC app/spdk_lspci/spdk_lspci.o 00:06:23.959 TEST_HEADER include/spdk/accel.h 00:06:23.959 TEST_HEADER include/spdk/accel_module.h 00:06:23.959 TEST_HEADER include/spdk/assert.h 00:06:23.959 TEST_HEADER include/spdk/barrier.h 00:06:23.959 CC app/spdk_nvme_perf/perf.o 00:06:23.959 TEST_HEADER include/spdk/base64.h 00:06:23.959 CC app/spdk_top/spdk_top.o 00:06:23.959 TEST_HEADER include/spdk/bdev.h 00:06:23.959 TEST_HEADER include/spdk/bdev_module.h 00:06:23.959 TEST_HEADER include/spdk/bdev_zone.h 00:06:23.959 CC app/spdk_nvme_discover/discovery_aer.o 00:06:23.959 TEST_HEADER include/spdk/bit_array.h 00:06:23.959 TEST_HEADER include/spdk/bit_pool.h 00:06:23.959 TEST_HEADER include/spdk/blob_bdev.h 00:06:23.959 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:23.959 TEST_HEADER include/spdk/blob.h 00:06:23.959 TEST_HEADER include/spdk/blobfs.h 00:06:23.959 CC test/rpc_client/rpc_client_test.o 00:06:23.959 TEST_HEADER include/spdk/conf.h 00:06:23.959 TEST_HEADER include/spdk/config.h 00:06:23.959 TEST_HEADER include/spdk/cpuset.h 00:06:23.959 TEST_HEADER include/spdk/crc16.h 00:06:23.959 TEST_HEADER include/spdk/crc32.h 00:06:23.959 TEST_HEADER include/spdk/crc64.h 00:06:24.222 TEST_HEADER include/spdk/dif.h 00:06:24.222 TEST_HEADER include/spdk/dma.h 00:06:24.222 TEST_HEADER include/spdk/endian.h 00:06:24.222 TEST_HEADER include/spdk/env_dpdk.h 00:06:24.222 TEST_HEADER include/spdk/env.h 00:06:24.222 TEST_HEADER include/spdk/event.h 00:06:24.222 
TEST_HEADER include/spdk/fd_group.h 00:06:24.222 TEST_HEADER include/spdk/fd.h 00:06:24.222 TEST_HEADER include/spdk/fsdev.h 00:06:24.222 TEST_HEADER include/spdk/file.h 00:06:24.222 TEST_HEADER include/spdk/fsdev_module.h 00:06:24.222 TEST_HEADER include/spdk/ftl.h 00:06:24.222 TEST_HEADER include/spdk/gpt_spec.h 00:06:24.222 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:24.222 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:24.222 TEST_HEADER include/spdk/hexlify.h 00:06:24.222 TEST_HEADER include/spdk/histogram_data.h 00:06:24.222 CC app/iscsi_tgt/iscsi_tgt.o 00:06:24.222 TEST_HEADER include/spdk/idxd.h 00:06:24.222 TEST_HEADER include/spdk/idxd_spec.h 00:06:24.222 TEST_HEADER include/spdk/init.h 00:06:24.223 TEST_HEADER include/spdk/ioat.h 00:06:24.223 TEST_HEADER include/spdk/ioat_spec.h 00:06:24.223 CC app/nvmf_tgt/nvmf_main.o 00:06:24.223 TEST_HEADER include/spdk/iscsi_spec.h 00:06:24.223 TEST_HEADER include/spdk/json.h 00:06:24.223 TEST_HEADER include/spdk/jsonrpc.h 00:06:24.223 TEST_HEADER include/spdk/keyring.h 00:06:24.223 CC app/spdk_dd/spdk_dd.o 00:06:24.223 TEST_HEADER include/spdk/keyring_module.h 00:06:24.223 TEST_HEADER include/spdk/likely.h 00:06:24.223 TEST_HEADER include/spdk/log.h 00:06:24.223 TEST_HEADER include/spdk/lvol.h 00:06:24.223 TEST_HEADER include/spdk/md5.h 00:06:24.223 TEST_HEADER include/spdk/memory.h 00:06:24.223 TEST_HEADER include/spdk/mmio.h 00:06:24.223 TEST_HEADER include/spdk/nbd.h 00:06:24.223 TEST_HEADER include/spdk/net.h 00:06:24.223 CC app/spdk_tgt/spdk_tgt.o 00:06:24.223 TEST_HEADER include/spdk/nvme.h 00:06:24.223 TEST_HEADER include/spdk/notify.h 00:06:24.223 TEST_HEADER include/spdk/nvme_intel.h 00:06:24.223 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:24.223 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:24.223 TEST_HEADER include/spdk/nvme_spec.h 00:06:24.223 TEST_HEADER include/spdk/nvme_zns.h 00:06:24.223 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:24.223 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:24.223 TEST_HEADER include/spdk/nvmf.h 00:06:24.223 TEST_HEADER include/spdk/nvmf_spec.h 00:06:24.223 TEST_HEADER include/spdk/nvmf_transport.h 00:06:24.223 TEST_HEADER include/spdk/opal.h 00:06:24.223 TEST_HEADER include/spdk/opal_spec.h 00:06:24.223 TEST_HEADER include/spdk/pci_ids.h 00:06:24.223 TEST_HEADER include/spdk/pipe.h 00:06:24.223 TEST_HEADER include/spdk/reduce.h 00:06:24.223 TEST_HEADER include/spdk/queue.h 00:06:24.223 TEST_HEADER include/spdk/rpc.h 00:06:24.223 TEST_HEADER include/spdk/scheduler.h 00:06:24.223 TEST_HEADER include/spdk/scsi.h 00:06:24.223 TEST_HEADER include/spdk/scsi_spec.h 00:06:24.223 TEST_HEADER include/spdk/stdinc.h 00:06:24.223 TEST_HEADER include/spdk/sock.h 00:06:24.223 TEST_HEADER include/spdk/string.h 00:06:24.223 TEST_HEADER include/spdk/thread.h 00:06:24.223 TEST_HEADER include/spdk/trace.h 00:06:24.223 TEST_HEADER include/spdk/tree.h 00:06:24.223 TEST_HEADER include/spdk/trace_parser.h 00:06:24.223 TEST_HEADER include/spdk/util.h 00:06:24.223 TEST_HEADER include/spdk/ublk.h 00:06:24.223 TEST_HEADER include/spdk/uuid.h 00:06:24.223 TEST_HEADER include/spdk/version.h 00:06:24.223 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:24.223 TEST_HEADER include/spdk/vhost.h 00:06:24.223 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:24.223 TEST_HEADER include/spdk/vmd.h 00:06:24.223 TEST_HEADER include/spdk/zipf.h 00:06:24.223 TEST_HEADER include/spdk/xor.h 00:06:24.223 CXX test/cpp_headers/accel_module.o 00:06:24.223 CXX test/cpp_headers/accel.o 00:06:24.223 CXX test/cpp_headers/assert.o 00:06:24.223 
CXX test/cpp_headers/base64.o 00:06:24.223 CXX test/cpp_headers/barrier.o 00:06:24.223 CXX test/cpp_headers/bdev.o 00:06:24.223 CXX test/cpp_headers/bdev_zone.o 00:06:24.223 CXX test/cpp_headers/bit_array.o 00:06:24.223 CXX test/cpp_headers/bdev_module.o 00:06:24.223 CXX test/cpp_headers/blob_bdev.o 00:06:24.223 CXX test/cpp_headers/bit_pool.o 00:06:24.223 CXX test/cpp_headers/blobfs_bdev.o 00:06:24.223 CXX test/cpp_headers/blobfs.o 00:06:24.223 CXX test/cpp_headers/conf.o 00:06:24.223 CXX test/cpp_headers/blob.o 00:06:24.223 CXX test/cpp_headers/config.o 00:06:24.223 CXX test/cpp_headers/cpuset.o 00:06:24.223 CXX test/cpp_headers/crc16.o 00:06:24.223 CXX test/cpp_headers/crc32.o 00:06:24.223 CXX test/cpp_headers/dif.o 00:06:24.223 CXX test/cpp_headers/crc64.o 00:06:24.223 CXX test/cpp_headers/dma.o 00:06:24.223 CXX test/cpp_headers/endian.o 00:06:24.223 CXX test/cpp_headers/env_dpdk.o 00:06:24.223 CXX test/cpp_headers/env.o 00:06:24.223 CXX test/cpp_headers/event.o 00:06:24.223 CXX test/cpp_headers/fd_group.o 00:06:24.223 CXX test/cpp_headers/fsdev.o 00:06:24.223 CXX test/cpp_headers/fd.o 00:06:24.223 CXX test/cpp_headers/file.o 00:06:24.223 CXX test/cpp_headers/fsdev_module.o 00:06:24.223 CXX test/cpp_headers/ftl.o 00:06:24.223 CXX test/cpp_headers/gpt_spec.o 00:06:24.223 CXX test/cpp_headers/hexlify.o 00:06:24.223 CXX test/cpp_headers/fuse_dispatcher.o 00:06:24.223 CXX test/cpp_headers/histogram_data.o 00:06:24.223 CXX test/cpp_headers/idxd.o 00:06:24.223 CXX test/cpp_headers/init.o 00:06:24.223 CXX test/cpp_headers/idxd_spec.o 00:06:24.223 CXX test/cpp_headers/ioat.o 00:06:24.223 CXX test/cpp_headers/ioat_spec.o 00:06:24.223 CXX test/cpp_headers/iscsi_spec.o 00:06:24.223 CXX test/cpp_headers/json.o 00:06:24.223 CXX test/cpp_headers/jsonrpc.o 00:06:24.223 CXX test/cpp_headers/keyring.o 00:06:24.223 CXX test/cpp_headers/keyring_module.o 00:06:24.223 CXX test/cpp_headers/lvol.o 00:06:24.223 CXX test/cpp_headers/md5.o 00:06:24.223 CXX test/cpp_headers/likely.o 00:06:24.223 CXX test/cpp_headers/log.o 00:06:24.223 CXX test/cpp_headers/memory.o 00:06:24.223 CXX test/cpp_headers/mmio.o 00:06:24.223 CXX test/cpp_headers/nbd.o 00:06:24.223 CXX test/cpp_headers/net.o 00:06:24.223 CXX test/cpp_headers/nvme.o 00:06:24.223 CXX test/cpp_headers/notify.o 00:06:24.223 CXX test/cpp_headers/nvme_ocssd.o 00:06:24.223 CXX test/cpp_headers/nvme_intel.o 00:06:24.223 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:24.223 CC examples/ioat/verify/verify.o 00:06:24.223 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:24.223 CC examples/ioat/perf/perf.o 00:06:24.223 CXX test/cpp_headers/nvme_zns.o 00:06:24.223 CXX test/cpp_headers/nvme_spec.o 00:06:24.223 CC examples/util/zipf/zipf.o 00:06:24.223 CXX test/cpp_headers/nvmf_cmd.o 00:06:24.223 CXX test/cpp_headers/nvmf.o 00:06:24.223 CXX test/cpp_headers/nvmf_spec.o 00:06:24.223 CXX test/cpp_headers/nvmf_transport.o 00:06:24.223 CXX test/cpp_headers/opal_spec.o 00:06:24.223 CXX test/cpp_headers/opal.o 00:06:24.223 CC test/thread/poller_perf/poller_perf.o 00:06:24.223 CC test/app/jsoncat/jsoncat.o 00:06:24.223 CXX test/cpp_headers/pci_ids.o 00:06:24.223 LINK spdk_lspci 00:06:24.223 CXX test/cpp_headers/queue.o 00:06:24.223 CXX test/cpp_headers/reduce.o 00:06:24.223 CXX test/cpp_headers/pipe.o 00:06:24.223 CXX test/cpp_headers/rpc.o 00:06:24.223 CC test/env/vtophys/vtophys.o 00:06:24.223 CXX test/cpp_headers/scheduler.o 00:06:24.223 CXX test/cpp_headers/scsi_spec.o 00:06:24.491 CXX test/cpp_headers/scsi.o 00:06:24.491 CXX test/cpp_headers/sock.o 00:06:24.491 CC 
test/app/stub/stub.o 00:06:24.491 CXX test/cpp_headers/stdinc.o 00:06:24.491 CXX test/cpp_headers/string.o 00:06:24.491 CC test/env/memory/memory_ut.o 00:06:24.491 CXX test/cpp_headers/trace.o 00:06:24.491 CXX test/cpp_headers/thread.o 00:06:24.491 CXX test/cpp_headers/trace_parser.o 00:06:24.491 CXX test/cpp_headers/tree.o 00:06:24.491 CXX test/cpp_headers/util.o 00:06:24.491 CXX test/cpp_headers/ublk.o 00:06:24.491 CC app/fio/nvme/fio_plugin.o 00:06:24.491 CXX test/cpp_headers/uuid.o 00:06:24.491 CC test/app/histogram_perf/histogram_perf.o 00:06:24.491 CXX test/cpp_headers/vfio_user_spec.o 00:06:24.491 CXX test/cpp_headers/version.o 00:06:24.491 CXX test/cpp_headers/vfio_user_pci.o 00:06:24.491 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:24.491 CXX test/cpp_headers/vmd.o 00:06:24.491 CXX test/cpp_headers/vhost.o 00:06:24.491 CXX test/cpp_headers/xor.o 00:06:24.491 CXX test/cpp_headers/zipf.o 00:06:24.491 CC test/env/pci/pci_ut.o 00:06:24.491 CC test/dma/test_dma/test_dma.o 00:06:24.491 CC test/app/bdev_svc/bdev_svc.o 00:06:24.491 CC app/fio/bdev/fio_plugin.o 00:06:24.760 LINK interrupt_tgt 00:06:24.760 LINK spdk_nvme_discover 00:06:24.760 LINK iscsi_tgt 00:06:24.760 LINK nvmf_tgt 00:06:24.760 LINK rpc_client_test 00:06:24.760 LINK spdk_trace_record 00:06:25.023 LINK spdk_tgt 00:06:25.023 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:25.023 LINK spdk_trace 00:06:25.023 LINK env_dpdk_post_init 00:06:25.023 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:25.023 CC test/env/mem_callbacks/mem_callbacks.o 00:06:25.023 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:25.023 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:25.285 LINK poller_perf 00:06:25.285 LINK histogram_perf 00:06:25.285 LINK jsoncat 00:06:25.285 LINK zipf 00:06:25.285 LINK vtophys 00:06:25.545 LINK spdk_dd 00:06:25.545 LINK stub 00:06:25.545 LINK bdev_svc 00:06:25.545 LINK ioat_perf 00:06:25.545 LINK verify 00:06:25.545 LINK spdk_nvme 00:06:25.545 CC app/vhost/vhost.o 00:06:25.807 LINK nvme_fuzz 00:06:25.807 LINK pci_ut 00:06:25.807 LINK vhost_fuzz 00:06:25.807 LINK test_dma 00:06:25.807 LINK spdk_bdev 00:06:25.807 CC test/event/event_perf/event_perf.o 00:06:25.807 CC test/event/reactor/reactor.o 00:06:25.807 CC test/event/reactor_perf/reactor_perf.o 00:06:25.807 LINK vhost 00:06:25.807 CC test/event/app_repeat/app_repeat.o 00:06:25.807 CC test/event/scheduler/scheduler.o 00:06:25.807 LINK mem_callbacks 00:06:25.807 CC examples/sock/hello_world/hello_sock.o 00:06:25.807 LINK spdk_nvme_perf 00:06:26.069 CC examples/idxd/perf/perf.o 00:06:26.069 CC examples/vmd/led/led.o 00:06:26.069 LINK spdk_top 00:06:26.069 CC examples/vmd/lsvmd/lsvmd.o 00:06:26.069 LINK spdk_nvme_identify 00:06:26.069 CC examples/thread/thread/thread_ex.o 00:06:26.069 LINK reactor 00:06:26.069 LINK event_perf 00:06:26.069 LINK reactor_perf 00:06:26.069 LINK app_repeat 00:06:26.069 LINK lsvmd 00:06:26.069 LINK led 00:06:26.069 LINK hello_sock 00:06:26.069 LINK scheduler 00:06:26.330 LINK thread 00:06:26.330 LINK idxd_perf 00:06:26.330 CC test/nvme/aer/aer.o 00:06:26.330 CC test/nvme/sgl/sgl.o 00:06:26.330 CC test/nvme/overhead/overhead.o 00:06:26.330 CC test/nvme/connect_stress/connect_stress.o 00:06:26.330 CC test/nvme/compliance/nvme_compliance.o 00:06:26.330 CC test/nvme/reserve/reserve.o 00:06:26.330 CC test/nvme/simple_copy/simple_copy.o 00:06:26.330 CC test/nvme/cuse/cuse.o 00:06:26.330 CC test/nvme/fused_ordering/fused_ordering.o 00:06:26.330 CC test/nvme/err_injection/err_injection.o 00:06:26.330 CC test/nvme/startup/startup.o 
00:06:26.330 CC test/nvme/reset/reset.o 00:06:26.330 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:26.330 CC test/nvme/e2edp/nvme_dp.o 00:06:26.330 CC test/nvme/boot_partition/boot_partition.o 00:06:26.330 CC test/nvme/fdp/fdp.o 00:06:26.591 LINK memory_ut 00:06:26.591 CC test/accel/dif/dif.o 00:06:26.591 CC test/blobfs/mkfs/mkfs.o 00:06:26.591 CC test/lvol/esnap/esnap.o 00:06:26.591 LINK err_injection 00:06:26.591 LINK boot_partition 00:06:26.591 LINK connect_stress 00:06:26.591 LINK startup 00:06:26.591 LINK fused_ordering 00:06:26.591 LINK doorbell_aers 00:06:26.852 LINK reserve 00:06:26.852 LINK simple_copy 00:06:26.852 LINK mkfs 00:06:26.852 LINK sgl 00:06:26.852 LINK reset 00:06:26.852 CC examples/nvme/arbitration/arbitration.o 00:06:26.852 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:26.852 CC examples/nvme/hello_world/hello_world.o 00:06:26.852 CC examples/nvme/abort/abort.o 00:06:26.852 LINK iscsi_fuzz 00:06:26.852 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:26.852 CC examples/nvme/reconnect/reconnect.o 00:06:26.852 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:26.852 CC examples/nvme/hotplug/hotplug.o 00:06:26.852 LINK aer 00:06:26.852 LINK nvme_dp 00:06:26.852 LINK overhead 00:06:26.852 LINK nvme_compliance 00:06:26.852 LINK fdp 00:06:26.852 CC examples/accel/perf/accel_perf.o 00:06:26.852 CC examples/blob/cli/blobcli.o 00:06:26.852 CC examples/blob/hello_world/hello_blob.o 00:06:26.852 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:27.112 LINK pmr_persistence 00:06:27.112 LINK cmb_copy 00:06:27.112 LINK hello_world 00:06:27.112 LINK hotplug 00:06:27.112 LINK arbitration 00:06:27.112 LINK dif 00:06:27.112 LINK reconnect 00:06:27.112 LINK abort 00:06:27.112 LINK hello_blob 00:06:27.373 LINK nvme_manage 00:06:27.373 LINK hello_fsdev 00:06:27.373 LINK accel_perf 00:06:27.373 LINK blobcli 00:06:27.635 LINK cuse 00:06:27.635 CC test/bdev/bdevio/bdevio.o 00:06:27.895 CC examples/bdev/hello_world/hello_bdev.o 00:06:27.895 CC examples/bdev/bdevperf/bdevperf.o 00:06:28.156 LINK bdevio 00:06:28.156 LINK hello_bdev 00:06:28.726 LINK bdevperf 00:06:29.299 CC examples/nvmf/nvmf/nvmf.o 00:06:29.560 LINK nvmf 00:06:31.476 LINK esnap 00:06:31.476 00:06:31.476 real 0m55.907s 00:06:31.476 user 8m7.690s 00:06:31.476 sys 5m27.102s 00:06:31.476 07:01:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:31.476 07:01:42 make -- common/autotest_common.sh@10 -- $ set +x 00:06:31.476 ************************************ 00:06:31.476 END TEST make 00:06:31.476 ************************************ 00:06:31.476 07:01:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:31.476 07:01:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:31.476 07:01:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:31.476 07:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.476 07:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:31.476 07:01:42 -- pm/common@44 -- $ pid=2065768 00:06:31.476 07:01:42 -- pm/common@50 -- $ kill -TERM 2065768 00:06:31.476 07:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.476 07:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:31.476 07:01:42 -- pm/common@44 -- $ pid=2065769 00:06:31.476 07:01:42 -- pm/common@50 -- $ kill -TERM 2065769 00:06:31.476 07:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:06:31.476 07:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:31.476 07:01:42 -- pm/common@44 -- $ pid=2065771 00:06:31.476 07:01:42 -- pm/common@50 -- $ kill -TERM 2065771 00:06:31.476 07:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.476 07:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:31.476 07:01:42 -- pm/common@44 -- $ pid=2065795 00:06:31.476 07:01:42 -- pm/common@50 -- $ sudo -E kill -TERM 2065795 00:06:31.737 07:01:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:31.737 07:01:42 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:31.737 07:01:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.737 07:01:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.737 07:01:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.737 07:01:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.737 07:01:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.737 07:01:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.737 07:01:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.737 07:01:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.737 07:01:42 -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.737 07:01:42 -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.737 07:01:42 -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.737 07:01:42 -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.737 07:01:42 -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.737 07:01:42 -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.737 07:01:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.737 07:01:42 -- scripts/common.sh@344 -- # case "$op" in 00:06:31.737 07:01:42 -- scripts/common.sh@345 -- # : 1 00:06:31.737 07:01:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.737 07:01:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.737 07:01:42 -- scripts/common.sh@365 -- # decimal 1 00:06:31.737 07:01:42 -- scripts/common.sh@353 -- # local d=1 00:06:31.737 07:01:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.737 07:01:42 -- scripts/common.sh@355 -- # echo 1 00:06:31.737 07:01:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.737 07:01:42 -- scripts/common.sh@366 -- # decimal 2 00:06:31.737 07:01:42 -- scripts/common.sh@353 -- # local d=2 00:06:31.737 07:01:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.737 07:01:42 -- scripts/common.sh@355 -- # echo 2 00:06:31.737 07:01:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.737 07:01:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.737 07:01:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.737 07:01:42 -- scripts/common.sh@368 -- # return 0 00:06:31.737 07:01:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.737 07:01:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.737 --rc genhtml_branch_coverage=1 00:06:31.737 --rc genhtml_function_coverage=1 00:06:31.737 --rc genhtml_legend=1 00:06:31.737 --rc geninfo_all_blocks=1 00:06:31.737 --rc geninfo_unexecuted_blocks=1 00:06:31.737 00:06:31.737 ' 00:06:31.737 07:01:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.737 --rc genhtml_branch_coverage=1 00:06:31.737 --rc genhtml_function_coverage=1 00:06:31.737 --rc genhtml_legend=1 00:06:31.737 --rc geninfo_all_blocks=1 00:06:31.737 --rc geninfo_unexecuted_blocks=1 00:06:31.737 00:06:31.737 ' 00:06:31.737 07:01:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.737 --rc genhtml_branch_coverage=1 00:06:31.737 --rc genhtml_function_coverage=1 00:06:31.737 --rc genhtml_legend=1 00:06:31.737 --rc geninfo_all_blocks=1 00:06:31.737 --rc geninfo_unexecuted_blocks=1 00:06:31.737 00:06:31.737 ' 00:06:31.737 07:01:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.737 --rc genhtml_branch_coverage=1 00:06:31.737 --rc genhtml_function_coverage=1 00:06:31.737 --rc genhtml_legend=1 00:06:31.737 --rc geninfo_all_blocks=1 00:06:31.737 --rc geninfo_unexecuted_blocks=1 00:06:31.737 00:06:31.737 ' 00:06:31.737 07:01:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.737 07:01:42 -- nvmf/common.sh@7 -- # uname -s 00:06:31.737 07:01:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.737 07:01:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.737 07:01:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.737 07:01:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.737 07:01:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.737 07:01:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.737 07:01:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.737 07:01:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.737 07:01:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.737 07:01:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.737 07:01:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.737 07:01:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.737 07:01:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.737 07:01:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.737 07:01:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.737 07:01:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.737 07:01:42 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.737 07:01:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.737 07:01:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.738 07:01:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.738 07:01:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.738 07:01:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.738 07:01:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.738 07:01:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.738 07:01:42 -- paths/export.sh@5 -- # export PATH 00:06:31.738 07:01:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.738 07:01:42 -- nvmf/common.sh@51 -- # : 0 00:06:31.738 07:01:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.738 07:01:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.738 07:01:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.738 07:01:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.738 07:01:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.738 07:01:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.738 07:01:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.738 07:01:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.738 07:01:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.738 07:01:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:31.738 07:01:42 -- spdk/autotest.sh@32 -- # uname -s 00:06:31.738 07:01:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:31.738 07:01:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:31.738 07:01:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
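The `[: : integer expression expected` diagnostic captured above is worth a note: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`, and `-eq` requires an integer on both sides, so an empty operand makes `[` print exactly that message and return nonzero (the run proceeds because the script tolerates the failed test). A minimal reproduction with a guarded variant follows; the variable name is illustrative, not taken from the script:

#!/usr/bin/env bash
flag=""                              # empty, as in the CI environment above
[ "$flag" -eq 1 ] && echo set        # stderr: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo set   # default to 0 first; the test now fails quietly

Defaulting with `${var:-0}` (or a `[[ -n $var ]]` pre-check) keeps numeric tests well-formed when a config flag may be unset.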
00:06:31.999 07:01:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:31.999 07:01:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:31.999 07:01:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:31.999 07:01:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:31.999 07:01:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:31.999 07:01:42 -- spdk/autotest.sh@48 -- # udevadm_pid=2131312 00:06:31.999 07:01:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:31.999 07:01:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:31.999 07:01:42 -- pm/common@17 -- # local monitor 00:06:31.999 07:01:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.999 07:01:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.999 07:01:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.999 07:01:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.999 07:01:42 -- pm/common@21 -- # date +%s 00:06:31.999 07:01:42 -- pm/common@21 -- # date +%s 00:06:31.999 07:01:42 -- pm/common@25 -- # sleep 1 00:06:31.999 07:01:42 -- pm/common@21 -- # date +%s 00:06:31.999 07:01:42 -- pm/common@21 -- # date +%s 00:06:31.999 07:01:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732687302 00:06:31.999 07:01:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732687302 00:06:31.999 07:01:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732687302 00:06:31.999 07:01:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732687302 00:06:31.999 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732687302_collect-cpu-load.pm.log 00:06:31.999 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732687302_collect-vmstat.pm.log 00:06:31.999 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732687302_collect-cpu-temp.pm.log 00:06:31.999 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732687302_collect-bmc-pm.bmc.pm.log 00:06:32.942 07:01:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:32.942 07:01:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:32.942 07:01:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.942 07:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.942 07:01:43 -- spdk/autotest.sh@59 -- # create_test_list 00:06:32.942 07:01:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:32.942 07:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.942 07:01:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:32.942 07:01:44 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.942 07:01:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.942 07:01:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:32.942 07:01:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.942 07:01:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:32.942 07:01:44 -- common/autotest_common.sh@1457 -- # uname 00:06:32.942 07:01:44 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:32.942 07:01:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:32.942 07:01:44 -- common/autotest_common.sh@1477 -- # uname 00:06:32.942 07:01:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:32.942 07:01:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:32.942 07:01:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:32.942 lcov: LCOV version 1.15 00:06:32.942 07:01:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:51.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:51.056 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:06.082 07:02:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:06.082 07:02:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.082 07:02:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.082 07:02:14 -- spdk/autotest.sh@78 -- # rm -f 00:07:06.082 07:02:14 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:07.465 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:07:07.465 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:07:07.465 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:07:07.465 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:07:07.465 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:07:07.465 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:07:07.465 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:07:07.466 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:07:07.466 0000:65:00.0 (144d a80a): Already using the nvme driver 00:07:07.466 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:07:07.466 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:07:07.466 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:07:07.727 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:07:07.727 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:07:07.727 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:07:07.727 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:07:07.727 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:07:07.988 07:02:19 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:07:07.988 07:02:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:07.988 07:02:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:07.988 07:02:19 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:07.988 07:02:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:07.988 07:02:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:07.988 07:02:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:07.988 07:02:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:07.988 07:02:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:07.988 07:02:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:07.988 07:02:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:07.988 07:02:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:07.988 07:02:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:07.988 07:02:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:07.988 07:02:19 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:07.988 No valid GPT data, bailing 00:07:07.988 07:02:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:07.988 07:02:19 -- scripts/common.sh@394 -- # pt= 00:07:07.988 07:02:19 -- scripts/common.sh@395 -- # return 1 00:07:07.988 07:02:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:07.988 1+0 records in 00:07:07.988 1+0 records out 00:07:07.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.001738 s, 603 MB/s 00:07:07.988 07:02:19 -- spdk/autotest.sh@105 -- # sync 00:07:07.989 07:02:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:07.989 07:02:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:07.989 07:02:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:17.984 07:02:27 -- spdk/autotest.sh@111 -- # uname -s 00:07:17.984 07:02:27 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:17.984 07:02:27 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:17.984 07:02:27 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:20.525 Hugepages 00:07:20.525 node hugesize free / total 00:07:20.525 node0 1048576kB 0 / 0 00:07:20.525 node0 2048kB 0 / 0 00:07:20.525 node1 1048576kB 0 / 0 00:07:20.525 node1 2048kB 0 / 0 00:07:20.525 00:07:20.525 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:20.525 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:07:20.525 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:07:20.525 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:07:20.525 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:07:20.525 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:07:20.525 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:07:20.525 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:07:20.525 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:07:20.525 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:07:20.525 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:07:20.525 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:07:20.525 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:07:20.525 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:07:20.525 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:07:20.525 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:07:20.525 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:07:20.525 I/OAT 0000:80:01.7 8086 0b00 
1 ioatdma - - 00:07:20.525 07:02:31 -- spdk/autotest.sh@117 -- # uname -s 00:07:20.525 07:02:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:20.525 07:02:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:20.525 07:02:31 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:24.017 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:24.017 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:25.934 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:07:26.195 07:02:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:27.143 07:02:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:27.143 07:02:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:27.143 07:02:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:27.144 07:02:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:27.144 07:02:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:27.144 07:02:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:27.144 07:02:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:27.144 07:02:38 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:27.144 07:02:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:27.144 07:02:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:27.144 07:02:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:07:27.144 07:02:38 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:30.450 Waiting for block devices as requested 00:07:30.710 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:07:30.710 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:07:30.710 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:07:30.970 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:07:30.970 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:07:30.970 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:07:31.231 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:07:31.231 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:07:31.231 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:07:31.491 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:07:31.491 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:07:31.752 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:07:31.752 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:07:31.752 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:07:32.013 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:07:32.013 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:07:32.013 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:07:32.585 07:02:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:32.585 07:02:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:07:32.585 07:02:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:07:32.585 07:02:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:32.585 07:02:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:32.585 07:02:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:32.585 07:02:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:07:32.585 07:02:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:32.585 07:02:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:32.585 07:02:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:32.585 07:02:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:32.585 07:02:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:32.585 07:02:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:32.585 07:02:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:32.585 07:02:43 -- common/autotest_common.sh@1543 -- # continue 00:07:32.585 07:02:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:32.585 07:02:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.585 07:02:43 -- common/autotest_common.sh@10 -- # set +x 00:07:32.585 07:02:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:32.585 07:02:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.585 07:02:43 -- common/autotest_common.sh@10 -- # set +x 00:07:32.585 07:02:43 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:35.889 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:35.889 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:35.889 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:35.889 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:35.889 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:36.150 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:07:36.722 07:02:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
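The pre-cleanup trace just above shows how the harness decides it may touch this controller: it resolves BDF 0000:65:00.0 to /dev/nvme0 through sysfs, reads the OACS (Optional Admin Command Support) field from `nvme id-ctrl` (0x5f here), and masks bit 3 (0x8, Namespace Management), giving oacs_ns_manage=8. A condensed sketch of the same checks, assuming a single controller at 0000:65:00.0 as in this run rather than the helper functions verbatim:

bdf=0000:65:00.0
# the sysfs path of the nvme node under the PCI device yields the controller name
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=$(basename "$path")                                   # -> nvme0
oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
(( oacs & 0x8 )) && echo "/dev/$ctrlr supports Namespace Management"   # 0x5f & 0x8 = 8

The later `unvmcap` check reads unallocated NVM capacity; with 0 unallocated bytes, as reported above, the loop simply moves on to the next device.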
00:07:36.722 07:02:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.722 07:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:36.722 07:02:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:36.722 07:02:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:36.722 07:02:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:36.722 07:02:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:36.722 07:02:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:36.722 07:02:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:36.722 07:02:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:36.722 07:02:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:36.722 07:02:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:36.722 07:02:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:36.722 07:02:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:36.722 07:02:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:36.722 07:02:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:36.722 07:02:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:36.722 07:02:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:07:36.722 07:02:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:36.722 07:02:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:07:36.722 07:02:47 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:07:36.722 07:02:47 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:07:36.722 07:02:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:36.722 07:02:47 -- common/autotest_common.sh@1572 -- # return 0 00:07:36.722 07:02:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:36.722 07:02:47 -- common/autotest_common.sh@1580 -- # return 0 00:07:36.722 07:02:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:36.722 07:02:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:36.722 07:02:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:36.722 07:02:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:36.722 07:02:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:36.722 07:02:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.722 07:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:36.722 07:02:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:36.722 07:02:47 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:36.722 07:02:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.722 07:02:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.722 07:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:36.722 ************************************ 00:07:36.722 START TEST env 00:07:36.722 ************************************ 00:07:36.722 07:02:47 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:36.983 * Looking for test storage... 
00:07:36.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:36.983 07:02:47 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.983 07:02:47 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.983 07:02:47 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.983 07:02:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.983 07:02:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.983 07:02:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.983 07:02:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.983 07:02:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.983 07:02:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.983 07:02:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.983 07:02:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.983 07:02:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.983 07:02:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.983 07:02:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.983 07:02:48 env -- scripts/common.sh@344 -- # case "$op" in 00:07:36.983 07:02:48 env -- scripts/common.sh@345 -- # : 1 00:07:36.983 07:02:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.983 07:02:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.983 07:02:48 env -- scripts/common.sh@365 -- # decimal 1 00:07:36.983 07:02:48 env -- scripts/common.sh@353 -- # local d=1 00:07:36.983 07:02:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.983 07:02:48 env -- scripts/common.sh@355 -- # echo 1 00:07:36.983 07:02:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.983 07:02:48 env -- scripts/common.sh@366 -- # decimal 2 00:07:36.983 07:02:48 env -- scripts/common.sh@353 -- # local d=2 00:07:36.983 07:02:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.983 07:02:48 env -- scripts/common.sh@355 -- # echo 2 00:07:36.983 07:02:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.983 07:02:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.983 07:02:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.983 07:02:48 env -- scripts/common.sh@368 -- # return 0 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.983 --rc genhtml_branch_coverage=1 00:07:36.983 --rc genhtml_function_coverage=1 00:07:36.983 --rc genhtml_legend=1 00:07:36.983 --rc geninfo_all_blocks=1 00:07:36.983 --rc geninfo_unexecuted_blocks=1 00:07:36.983 00:07:36.983 ' 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.983 --rc genhtml_branch_coverage=1 00:07:36.983 --rc genhtml_function_coverage=1 00:07:36.983 --rc genhtml_legend=1 00:07:36.983 --rc geninfo_all_blocks=1 00:07:36.983 --rc geninfo_unexecuted_blocks=1 00:07:36.983 00:07:36.983 ' 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.983 --rc genhtml_branch_coverage=1 00:07:36.983 --rc genhtml_function_coverage=1 
00:07:36.983 --rc genhtml_legend=1 00:07:36.983 --rc geninfo_all_blocks=1 00:07:36.983 --rc geninfo_unexecuted_blocks=1 00:07:36.983 00:07:36.983 ' 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.983 --rc genhtml_branch_coverage=1 00:07:36.983 --rc genhtml_function_coverage=1 00:07:36.983 --rc genhtml_legend=1 00:07:36.983 --rc geninfo_all_blocks=1 00:07:36.983 --rc geninfo_unexecuted_blocks=1 00:07:36.983 00:07:36.983 ' 00:07:36.983 07:02:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.983 07:02:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.983 07:02:48 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.983 ************************************ 00:07:36.983 START TEST env_memory 00:07:36.983 ************************************ 00:07:36.983 07:02:48 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:36.983 00:07:36.983 00:07:36.983 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.983 http://cunit.sourceforge.net/ 00:07:36.983 00:07:36.983 00:07:36.983 Suite: memory 00:07:36.983 Test: alloc and free memory map ...[2024-11-27 07:02:48.146494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:36.983 passed 00:07:36.983 Test: mem map translation ...[2024-11-27 07:02:48.172254] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:36.983 [2024-11-27 07:02:48.172285] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:36.983 [2024-11-27 07:02:48.172333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:36.983 [2024-11-27 07:02:48.172341] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:37.244 passed 00:07:37.244 Test: mem map registration ...[2024-11-27 07:02:48.227623] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:37.244 [2024-11-27 07:02:48.227649] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:37.244 passed 00:07:37.244 Test: mem map adjacent registrations ...passed 00:07:37.244 00:07:37.244 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.244 suites 1 1 n/a 0 0 00:07:37.244 tests 4 4 4 0 0 00:07:37.244 asserts 152 152 152 0 n/a 00:07:37.244 00:07:37.244 Elapsed time = 0.192 seconds 00:07:37.244 00:07:37.244 real 0m0.207s 00:07:37.244 user 0m0.196s 00:07:37.244 sys 0m0.010s 00:07:37.244 07:02:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.244 07:02:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:07:37.244 ************************************ 00:07:37.244 END TEST env_memory 00:07:37.244 ************************************ 00:07:37.244 07:02:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:37.244 07:02:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.244 07:02:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.244 07:02:48 env -- common/autotest_common.sh@10 -- # set +x 00:07:37.244 ************************************ 00:07:37.244 START TEST env_vtophys 00:07:37.244 ************************************ 00:07:37.244 07:02:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:37.244 EAL: lib.eal log level changed from notice to debug 00:07:37.244 EAL: Detected lcore 0 as core 0 on socket 0 00:07:37.244 EAL: Detected lcore 1 as core 1 on socket 0 00:07:37.244 EAL: Detected lcore 2 as core 2 on socket 0 00:07:37.245 EAL: Detected lcore 3 as core 3 on socket 0 00:07:37.245 EAL: Detected lcore 4 as core 4 on socket 0 00:07:37.245 EAL: Detected lcore 5 as core 5 on socket 0 00:07:37.245 EAL: Detected lcore 6 as core 6 on socket 0 00:07:37.245 EAL: Detected lcore 7 as core 7 on socket 0 00:07:37.245 EAL: Detected lcore 8 as core 8 on socket 0 00:07:37.245 EAL: Detected lcore 9 as core 9 on socket 0 00:07:37.245 EAL: Detected lcore 10 as core 10 on socket 0 00:07:37.245 EAL: Detected lcore 11 as core 11 on socket 0 00:07:37.245 EAL: Detected lcore 12 as core 12 on socket 0 00:07:37.245 EAL: Detected lcore 13 as core 13 on socket 0 00:07:37.245 EAL: Detected lcore 14 as core 14 on socket 0 00:07:37.245 EAL: Detected lcore 15 as core 15 on socket 0 00:07:37.245 EAL: Detected lcore 16 as core 16 on socket 0 00:07:37.245 EAL: Detected lcore 17 as core 17 on socket 0 00:07:37.245 EAL: Detected lcore 18 as core 18 on socket 0 00:07:37.245 EAL: Detected lcore 19 as core 19 on socket 0 00:07:37.245 EAL: Detected lcore 20 as core 20 on socket 0 00:07:37.245 EAL: Detected lcore 21 as core 21 on socket 0 00:07:37.245 EAL: Detected lcore 22 as core 22 on socket 0 00:07:37.245 EAL: Detected lcore 23 as core 23 on socket 0 00:07:37.245 EAL: Detected lcore 24 as core 24 on socket 0 00:07:37.245 EAL: Detected lcore 25 as core 25 on socket 0 00:07:37.245 EAL: Detected lcore 26 as core 26 on socket 0 00:07:37.245 EAL: Detected lcore 27 as core 27 on socket 0 00:07:37.245 EAL: Detected lcore 28 as core 28 on socket 0 00:07:37.245 EAL: Detected lcore 29 as core 29 on socket 0 00:07:37.245 EAL: Detected lcore 30 as core 30 on socket 0 00:07:37.245 EAL: Detected lcore 31 as core 31 on socket 0 00:07:37.245 EAL: Detected lcore 32 as core 32 on socket 0 00:07:37.245 EAL: Detected lcore 33 as core 33 on socket 0 00:07:37.245 EAL: Detected lcore 34 as core 34 on socket 0 00:07:37.245 EAL: Detected lcore 35 as core 35 on socket 0 00:07:37.245 EAL: Detected lcore 36 as core 0 on socket 1 00:07:37.245 EAL: Detected lcore 37 as core 1 on socket 1 00:07:37.245 EAL: Detected lcore 38 as core 2 on socket 1 00:07:37.245 EAL: Detected lcore 39 as core 3 on socket 1 00:07:37.245 EAL: Detected lcore 40 as core 4 on socket 1 00:07:37.245 EAL: Detected lcore 41 as core 5 on socket 1 00:07:37.245 EAL: Detected lcore 42 as core 6 on socket 1 00:07:37.245 EAL: Detected lcore 43 as core 7 on socket 1 00:07:37.245 EAL: Detected lcore 44 as core 8 on socket 1 00:07:37.245 EAL: Detected lcore 45 as core 9 on socket 1 
00:07:37.245 EAL: Detected lcore 46 as core 10 on socket 1 00:07:37.245 EAL: Detected lcore 47 as core 11 on socket 1 00:07:37.245 EAL: Detected lcore 48 as core 12 on socket 1 00:07:37.245 EAL: Detected lcore 49 as core 13 on socket 1 00:07:37.245 EAL: Detected lcore 50 as core 14 on socket 1 00:07:37.245 EAL: Detected lcore 51 as core 15 on socket 1 00:07:37.245 EAL: Detected lcore 52 as core 16 on socket 1 00:07:37.245 EAL: Detected lcore 53 as core 17 on socket 1 00:07:37.245 EAL: Detected lcore 54 as core 18 on socket 1 00:07:37.245 EAL: Detected lcore 55 as core 19 on socket 1 00:07:37.245 EAL: Detected lcore 56 as core 20 on socket 1 00:07:37.245 EAL: Detected lcore 57 as core 21 on socket 1 00:07:37.245 EAL: Detected lcore 58 as core 22 on socket 1 00:07:37.245 EAL: Detected lcore 59 as core 23 on socket 1 00:07:37.245 EAL: Detected lcore 60 as core 24 on socket 1 00:07:37.245 EAL: Detected lcore 61 as core 25 on socket 1 00:07:37.245 EAL: Detected lcore 62 as core 26 on socket 1 00:07:37.245 EAL: Detected lcore 63 as core 27 on socket 1 00:07:37.245 EAL: Detected lcore 64 as core 28 on socket 1 00:07:37.245 EAL: Detected lcore 65 as core 29 on socket 1 00:07:37.245 EAL: Detected lcore 66 as core 30 on socket 1 00:07:37.245 EAL: Detected lcore 67 as core 31 on socket 1 00:07:37.245 EAL: Detected lcore 68 as core 32 on socket 1 00:07:37.245 EAL: Detected lcore 69 as core 33 on socket 1 00:07:37.245 EAL: Detected lcore 70 as core 34 on socket 1 00:07:37.245 EAL: Detected lcore 71 as core 35 on socket 1 00:07:37.245 EAL: Detected lcore 72 as core 0 on socket 0 00:07:37.245 EAL: Detected lcore 73 as core 1 on socket 0 00:07:37.245 EAL: Detected lcore 74 as core 2 on socket 0 00:07:37.245 EAL: Detected lcore 75 as core 3 on socket 0 00:07:37.245 EAL: Detected lcore 76 as core 4 on socket 0 00:07:37.245 EAL: Detected lcore 77 as core 5 on socket 0 00:07:37.245 EAL: Detected lcore 78 as core 6 on socket 0 00:07:37.245 EAL: Detected lcore 79 as core 7 on socket 0 00:07:37.245 EAL: Detected lcore 80 as core 8 on socket 0 00:07:37.245 EAL: Detected lcore 81 as core 9 on socket 0 00:07:37.245 EAL: Detected lcore 82 as core 10 on socket 0 00:07:37.245 EAL: Detected lcore 83 as core 11 on socket 0 00:07:37.245 EAL: Detected lcore 84 as core 12 on socket 0 00:07:37.245 EAL: Detected lcore 85 as core 13 on socket 0 00:07:37.245 EAL: Detected lcore 86 as core 14 on socket 0 00:07:37.245 EAL: Detected lcore 87 as core 15 on socket 0 00:07:37.245 EAL: Detected lcore 88 as core 16 on socket 0 00:07:37.245 EAL: Detected lcore 89 as core 17 on socket 0 00:07:37.245 EAL: Detected lcore 90 as core 18 on socket 0 00:07:37.245 EAL: Detected lcore 91 as core 19 on socket 0 00:07:37.245 EAL: Detected lcore 92 as core 20 on socket 0 00:07:37.245 EAL: Detected lcore 93 as core 21 on socket 0 00:07:37.245 EAL: Detected lcore 94 as core 22 on socket 0 00:07:37.245 EAL: Detected lcore 95 as core 23 on socket 0 00:07:37.245 EAL: Detected lcore 96 as core 24 on socket 0 00:07:37.245 EAL: Detected lcore 97 as core 25 on socket 0 00:07:37.245 EAL: Detected lcore 98 as core 26 on socket 0 00:07:37.245 EAL: Detected lcore 99 as core 27 on socket 0 00:07:37.245 EAL: Detected lcore 100 as core 28 on socket 0 00:07:37.245 EAL: Detected lcore 101 as core 29 on socket 0 00:07:37.245 EAL: Detected lcore 102 as core 30 on socket 0 00:07:37.245 EAL: Detected lcore 103 as core 31 on socket 0 00:07:37.245 EAL: Detected lcore 104 as core 32 on socket 0 00:07:37.245 EAL: Detected lcore 105 as core 33 on socket 0 00:07:37.245 EAL: 
Detected lcore 106 as core 34 on socket 0 00:07:37.245 EAL: Detected lcore 107 as core 35 on socket 0 00:07:37.245 EAL: Detected lcore 108 as core 0 on socket 1 00:07:37.245 EAL: Detected lcore 109 as core 1 on socket 1 00:07:37.245 EAL: Detected lcore 110 as core 2 on socket 1 00:07:37.245 EAL: Detected lcore 111 as core 3 on socket 1 00:07:37.245 EAL: Detected lcore 112 as core 4 on socket 1 00:07:37.245 EAL: Detected lcore 113 as core 5 on socket 1 00:07:37.245 EAL: Detected lcore 114 as core 6 on socket 1 00:07:37.245 EAL: Detected lcore 115 as core 7 on socket 1 00:07:37.245 EAL: Detected lcore 116 as core 8 on socket 1 00:07:37.245 EAL: Detected lcore 117 as core 9 on socket 1 00:07:37.245 EAL: Detected lcore 118 as core 10 on socket 1 00:07:37.245 EAL: Detected lcore 119 as core 11 on socket 1 00:07:37.245 EAL: Detected lcore 120 as core 12 on socket 1 00:07:37.245 EAL: Detected lcore 121 as core 13 on socket 1 00:07:37.245 EAL: Detected lcore 122 as core 14 on socket 1 00:07:37.245 EAL: Detected lcore 123 as core 15 on socket 1 00:07:37.245 EAL: Detected lcore 124 as core 16 on socket 1 00:07:37.245 EAL: Detected lcore 125 as core 17 on socket 1 00:07:37.245 EAL: Detected lcore 126 as core 18 on socket 1 00:07:37.245 EAL: Detected lcore 127 as core 19 on socket 1 00:07:37.245 EAL: Skipped lcore 128 as core 20 on socket 1 00:07:37.245 EAL: Skipped lcore 129 as core 21 on socket 1 00:07:37.245 EAL: Skipped lcore 130 as core 22 on socket 1 00:07:37.245 EAL: Skipped lcore 131 as core 23 on socket 1 00:07:37.245 EAL: Skipped lcore 132 as core 24 on socket 1 00:07:37.245 EAL: Skipped lcore 133 as core 25 on socket 1 00:07:37.245 EAL: Skipped lcore 134 as core 26 on socket 1 00:07:37.245 EAL: Skipped lcore 135 as core 27 on socket 1 00:07:37.245 EAL: Skipped lcore 136 as core 28 on socket 1 00:07:37.245 EAL: Skipped lcore 137 as core 29 on socket 1 00:07:37.245 EAL: Skipped lcore 138 as core 30 on socket 1 00:07:37.245 EAL: Skipped lcore 139 as core 31 on socket 1 00:07:37.245 EAL: Skipped lcore 140 as core 32 on socket 1 00:07:37.245 EAL: Skipped lcore 141 as core 33 on socket 1 00:07:37.245 EAL: Skipped lcore 142 as core 34 on socket 1 00:07:37.245 EAL: Skipped lcore 143 as core 35 on socket 1 00:07:37.245 EAL: Maximum logical cores by configuration: 128 00:07:37.245 EAL: Detected CPU lcores: 128 00:07:37.245 EAL: Detected NUMA nodes: 2 00:07:37.245 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:37.245 EAL: Detected shared linkage of DPDK 00:07:37.245 EAL: No shared files mode enabled, IPC will be disabled 00:07:37.506 EAL: Bus pci wants IOVA as 'DC' 00:07:37.506 EAL: Buses did not request a specific IOVA mode. 00:07:37.506 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:37.506 EAL: Selected IOVA mode 'VA' 00:07:37.506 EAL: Probing VFIO support... 00:07:37.506 EAL: IOMMU type 1 (Type 1) is supported 00:07:37.506 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:37.506 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:37.506 EAL: VFIO support initialized 00:07:37.506 EAL: Ask a virtual area of 0x2e000 bytes 00:07:37.506 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:37.506 EAL: Setting up physically contiguous memory... 
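The lcore inventory above (0-127 detected, 128-143 skipped once the configured maximum of 128 is reached, spread over 2 NUMA sockets) is assembled from the same topology data the kernel publishes in sysfs. A rough shell equivalent of that enumeration, using standard sysfs paths rather than EAL internals:

for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  n=${cpu##*cpu}
  core=$(cat "$cpu/topology/core_id")
  socket=$(cat "$cpu/topology/physical_package_id")
  echo "lcore $n as core $core on socket $socket"
done | sort -k2,2n

The 128-lcore ceiling is DPDK's build-time RTE_MAX_LCORE; anything the kernel reports beyond it is logged as skipped rather than mapped.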
00:07:37.506 EAL: Setting maximum number of open files to 524288 00:07:37.506 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:37.506 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:37.506 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:37.506 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.506 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:37.506 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:37.506 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.506 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:37.506 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:37.506 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.506 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:37.506 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:37.506 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.506 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:37.506 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:37.507 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.507 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:37.507 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:37.507 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.507 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:37.507 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:37.507 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.507 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:37.507 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:37.507 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.507 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:37.507 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:37.507 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:37.507 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.507 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:37.507 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:37.507 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.507 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:37.507 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:37.507 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.507 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:37.507 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:37.507 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.507 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:37.507 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:37.507 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.507 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:37.507 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:37.507 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.507 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:37.507 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:37.507 EAL: Ask a virtual area of 0x61000 bytes 00:07:37.507 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:37.507 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:37.507 EAL: Ask a virtual area of 0x400000000 bytes 00:07:37.507 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:07:37.507 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:37.507 EAL: Hugepages will be freed exactly as allocated. 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: TSC frequency is ~2400000 KHz 00:07:37.507 EAL: Main lcore 0 is ready (tid=7f8986a0ca00;cpuset=[0]) 00:07:37.507 EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 0 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 2MB 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:37.507 EAL: Mem event callback 'spdk:(nil)' registered 00:07:37.507 00:07:37.507 00:07:37.507 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.507 http://cunit.sourceforge.net/ 00:07:37.507 00:07:37.507 00:07:37.507 Suite: components_suite 00:07:37.507 Test: vtophys_malloc_test ...passed 00:07:37.507 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 4MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 4MB 00:07:37.507 EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 6MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 6MB 00:07:37.507 EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 10MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 10MB 00:07:37.507 EAL: Trying to obtain current memory policy. 
00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 18MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 18MB 00:07:37.507 EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 34MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 34MB 00:07:37.507 EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 66MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 66MB 00:07:37.507 EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 130MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 130MB 00:07:37.507 EAL: Trying to obtain current memory policy. 00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.507 EAL: Restoring previous memory policy: 4 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was expanded by 258MB 00:07:37.507 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.507 EAL: request: mp_malloc_sync 00:07:37.507 EAL: No shared files mode enabled, IPC is disabled 00:07:37.507 EAL: Heap on socket 0 was shrunk by 258MB 00:07:37.507 EAL: Trying to obtain current memory policy. 
00:07:37.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.767 EAL: Restoring previous memory policy: 4 00:07:37.767 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.767 EAL: request: mp_malloc_sync 00:07:37.767 EAL: No shared files mode enabled, IPC is disabled 00:07:37.767 EAL: Heap on socket 0 was expanded by 514MB 00:07:37.767 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.767 EAL: request: mp_malloc_sync 00:07:37.767 EAL: No shared files mode enabled, IPC is disabled 00:07:37.767 EAL: Heap on socket 0 was shrunk by 514MB 00:07:37.767 EAL: Trying to obtain current memory policy. 00:07:37.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:38.028 EAL: Restoring previous memory policy: 4 00:07:38.028 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.028 EAL: request: mp_malloc_sync 00:07:38.028 EAL: No shared files mode enabled, IPC is disabled 00:07:38.028 EAL: Heap on socket 0 was expanded by 1026MB 00:07:38.028 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.028 EAL: request: mp_malloc_sync 00:07:38.028 EAL: No shared files mode enabled, IPC is disabled 00:07:38.028 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:38.028 passed 00:07:38.028 00:07:38.028 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.028 suites 1 1 n/a 0 0 00:07:38.028 tests 2 2 2 0 0 00:07:38.028 asserts 497 497 497 0 n/a 00:07:38.028 00:07:38.028 Elapsed time = 0.685 seconds 00:07:38.028 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.028 EAL: request: mp_malloc_sync 00:07:38.028 EAL: No shared files mode enabled, IPC is disabled 00:07:38.028 EAL: Heap on socket 0 was shrunk by 2MB 00:07:38.028 EAL: No shared files mode enabled, IPC is disabled 00:07:38.028 EAL: No shared files mode enabled, IPC is disabled 00:07:38.028 EAL: No shared files mode enabled, IPC is disabled 00:07:38.028 00:07:38.028 real 0m0.840s 00:07:38.028 user 0m0.444s 00:07:38.028 sys 0m0.369s 00:07:38.028 07:02:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.028 07:02:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:38.028 ************************************ 00:07:38.028 END TEST env_vtophys 00:07:38.028 ************************************ 00:07:38.288 07:02:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:38.288 07:02:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.288 07:02:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.288 07:02:49 env -- common/autotest_common.sh@10 -- # set +x 00:07:38.288 ************************************ 00:07:38.288 START TEST env_pci 00:07:38.288 ************************************ 00:07:38.288 07:02:49 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:38.288 00:07:38.288 00:07:38.288 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.288 http://cunit.sourceforge.net/ 00:07:38.288 00:07:38.288 00:07:38.288 Suite: pci 00:07:38.288 Test: pci_hook ...[2024-11-27 07:02:49.326276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2150743 has claimed it 00:07:38.288 EAL: Cannot find device (10000:00:01.0) 00:07:38.288 EAL: Failed to attach device on primary process 00:07:38.288 passed 00:07:38.288 00:07:38.288 Run Summary: Type Total Ran Passed Failed Inactive 
00:07:38.288 suites 1 1 n/a 0 0 00:07:38.288 tests 1 1 1 0 0 00:07:38.288 asserts 25 25 25 0 n/a 00:07:38.288 00:07:38.288 Elapsed time = 0.032 seconds 00:07:38.288 00:07:38.288 real 0m0.054s 00:07:38.288 user 0m0.014s 00:07:38.288 sys 0m0.040s 00:07:38.288 07:02:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.288 07:02:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:38.288 ************************************ 00:07:38.288 END TEST env_pci 00:07:38.288 ************************************ 00:07:38.288 07:02:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:38.288 07:02:49 env -- env/env.sh@15 -- # uname 00:07:38.288 07:02:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:38.288 07:02:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:38.288 07:02:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:38.288 07:02:49 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.288 07:02:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.288 07:02:49 env -- common/autotest_common.sh@10 -- # set +x 00:07:38.288 ************************************ 00:07:38.288 START TEST env_dpdk_post_init 00:07:38.288 ************************************ 00:07:38.288 07:02:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:38.288 EAL: Detected CPU lcores: 128 00:07:38.288 EAL: Detected NUMA nodes: 2 00:07:38.288 EAL: Detected shared linkage of DPDK 00:07:38.288 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:38.549 EAL: Selected IOVA mode 'VA' 00:07:38.549 EAL: VFIO support initialized 00:07:38.549 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:38.549 EAL: Using IOMMU type 1 (Type 1) 00:07:38.549 EAL: Ignore mapping IO port bar(1) 00:07:38.810 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:07:38.810 EAL: Ignore mapping IO port bar(1) 00:07:39.071 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:07:39.071 EAL: Ignore mapping IO port bar(1) 00:07:39.071 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:07:39.333 EAL: Ignore mapping IO port bar(1) 00:07:39.333 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:07:39.595 EAL: Ignore mapping IO port bar(1) 00:07:39.595 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:07:39.856 EAL: Ignore mapping IO port bar(1) 00:07:39.856 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:07:40.117 EAL: Ignore mapping IO port bar(1) 00:07:40.117 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:07:40.117 EAL: Ignore mapping IO port bar(1) 00:07:40.378 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:07:40.639 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:07:40.639 EAL: Ignore mapping IO port bar(1) 00:07:40.639 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:07:40.900 EAL: Ignore mapping IO port bar(1) 00:07:40.900 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:07:41.161 EAL: Ignore mapping IO port bar(1) 00:07:41.161 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:07:41.422 EAL: Ignore mapping IO port bar(1) 00:07:41.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:07:41.684 EAL: Ignore mapping IO port bar(1) 00:07:41.684 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:07:41.684 EAL: Ignore mapping IO port bar(1) 00:07:41.944 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:07:41.944 EAL: Ignore mapping IO port bar(1) 00:07:42.205 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:07:42.205 EAL: Ignore mapping IO port bar(1) 00:07:42.466 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:07:42.466 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:07:42.466 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:07:42.466 Starting DPDK initialization... 00:07:42.466 Starting SPDK post initialization... 00:07:42.466 SPDK NVMe probe 00:07:42.466 Attaching to 0000:65:00.0 00:07:42.466 Attached to 0000:65:00.0 00:07:42.466 Cleaning up... 00:07:44.382 00:07:44.382 real 0m5.749s 00:07:44.382 user 0m0.115s 00:07:44.382 sys 0m0.189s 00:07:44.382 07:02:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.382 07:02:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:44.382 ************************************ 00:07:44.382 END TEST env_dpdk_post_init 00:07:44.382 ************************************ 00:07:44.382 07:02:55 env -- env/env.sh@26 -- # uname 00:07:44.382 07:02:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:44.382 07:02:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:44.382 07:02:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.382 07:02:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.382 07:02:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:44.382 ************************************ 00:07:44.382 START TEST env_mem_callbacks 00:07:44.382 ************************************ 00:07:44.382 07:02:55 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:44.382 EAL: Detected CPU lcores: 128 00:07:44.382 EAL: Detected NUMA nodes: 2 00:07:44.382 EAL: Detected shared linkage of DPDK 00:07:44.382 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:44.382 EAL: Selected IOVA mode 'VA' 00:07:44.382 EAL: VFIO support initialized 00:07:44.382 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:44.382 00:07:44.382 00:07:44.382 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.382 http://cunit.sourceforge.net/ 00:07:44.382 00:07:44.382 00:07:44.382 Suite: memory 00:07:44.382 Test: test ... 
00:07:44.382 register 0x200000200000 2097152 00:07:44.382 malloc 3145728 00:07:44.382 register 0x200000400000 4194304 00:07:44.382 buf 0x200000500000 len 3145728 PASSED 00:07:44.382 malloc 64 00:07:44.382 buf 0x2000004fff40 len 64 PASSED 00:07:44.382 malloc 4194304 00:07:44.382 register 0x200000800000 6291456 00:07:44.382 buf 0x200000a00000 len 4194304 PASSED 00:07:44.382 free 0x200000500000 3145728 00:07:44.382 free 0x2000004fff40 64 00:07:44.382 unregister 0x200000400000 4194304 PASSED 00:07:44.382 free 0x200000a00000 4194304 00:07:44.382 unregister 0x200000800000 6291456 PASSED 00:07:44.382 malloc 8388608 00:07:44.382 register 0x200000400000 10485760 00:07:44.383 buf 0x200000600000 len 8388608 PASSED 00:07:44.383 free 0x200000600000 8388608 00:07:44.383 unregister 0x200000400000 10485760 PASSED 00:07:44.383 passed 00:07:44.383 00:07:44.383 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.383 suites 1 1 n/a 0 0 00:07:44.383 tests 1 1 1 0 0 00:07:44.383 asserts 15 15 15 0 n/a 00:07:44.383 00:07:44.383 Elapsed time = 0.010 seconds 00:07:44.383 00:07:44.383 real 0m0.071s 00:07:44.383 user 0m0.022s 00:07:44.383 sys 0m0.049s 00:07:44.383 07:02:55 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.383 07:02:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:44.383 ************************************ 00:07:44.383 END TEST env_mem_callbacks 00:07:44.383 ************************************ 00:07:44.383 00:07:44.383 real 0m7.553s 00:07:44.383 user 0m1.055s 00:07:44.383 sys 0m1.057s 00:07:44.383 07:02:55 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.383 07:02:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:44.383 ************************************ 00:07:44.383 END TEST env 00:07:44.383 ************************************ 00:07:44.383 07:02:55 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:44.383 07:02:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.383 07:02:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.383 07:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.383 ************************************ 00:07:44.383 START TEST rpc 00:07:44.383 ************************************ 00:07:44.383 07:02:55 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:44.383 * Looking for test storage... 
00:07:44.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.645 07:02:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.645 07:02:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.645 07:02:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.645 07:02:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.645 07:02:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.645 07:02:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.645 07:02:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.645 07:02:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:44.645 07:02:55 rpc -- scripts/common.sh@345 -- # : 1 00:07:44.645 07:02:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.645 07:02:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.645 07:02:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:44.645 07:02:55 rpc -- scripts/common.sh@353 -- # local d=1 00:07:44.645 07:02:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.645 07:02:55 rpc -- scripts/common.sh@355 -- # echo 1 00:07:44.645 07:02:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.645 07:02:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@353 -- # local d=2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.645 07:02:55 rpc -- scripts/common.sh@355 -- # echo 2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.645 07:02:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.645 07:02:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.645 07:02:55 rpc -- scripts/common.sh@368 -- # return 0 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.645 --rc genhtml_branch_coverage=1 00:07:44.645 --rc genhtml_function_coverage=1 00:07:44.645 --rc genhtml_legend=1 00:07:44.645 --rc geninfo_all_blocks=1 00:07:44.645 --rc geninfo_unexecuted_blocks=1 00:07:44.645 00:07:44.645 ' 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.645 --rc genhtml_branch_coverage=1 00:07:44.645 --rc genhtml_function_coverage=1 00:07:44.645 --rc genhtml_legend=1 00:07:44.645 --rc geninfo_all_blocks=1 00:07:44.645 --rc geninfo_unexecuted_blocks=1 00:07:44.645 00:07:44.645 ' 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.645 --rc genhtml_branch_coverage=1 00:07:44.645 --rc genhtml_function_coverage=1 
00:07:44.645 --rc genhtml_legend=1 00:07:44.645 --rc geninfo_all_blocks=1 00:07:44.645 --rc geninfo_unexecuted_blocks=1 00:07:44.645 00:07:44.645 ' 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.645 --rc genhtml_branch_coverage=1 00:07:44.645 --rc genhtml_function_coverage=1 00:07:44.645 --rc genhtml_legend=1 00:07:44.645 --rc geninfo_all_blocks=1 00:07:44.645 --rc geninfo_unexecuted_blocks=1 00:07:44.645 00:07:44.645 ' 00:07:44.645 07:02:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2152049 00:07:44.645 07:02:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:44.645 07:02:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:44.645 07:02:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2152049 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@835 -- # '[' -z 2152049 ']' 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.645 07:02:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.645 [2024-11-27 07:02:55.749721] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:07:44.645 [2024-11-27 07:02:55.749789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152049 ] 00:07:44.645 [2024-11-27 07:02:55.842882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.906 [2024-11-27 07:02:55.894965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:44.906 [2024-11-27 07:02:55.895020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2152049' to capture a snapshot of events at runtime. 00:07:44.906 [2024-11-27 07:02:55.895029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.906 [2024-11-27 07:02:55.895036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.906 [2024-11-27 07:02:55.895043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2152049 for offline analysis/debug. 
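The app.c notices above come from spdk_tgt being started with '-e bdev', which enables the bdev tracepoint group and backs it with the /dev/shm/spdk_tgt_trace.pid2152049 shared-memory file mentioned in the log. A rough sketch of the same startup path as a standalone app, assuming SPDK's public event API (the tpoint_group_mask field name is taken from struct spdk_app_opts; the rest is illustrative, not spdk_tgt's actual code):

```c
#include "spdk/event.h"
#include "spdk/log.h"

/* Runs on the first reactor once the framework is up; by this point the
 * JSON-RPC server is listening on /var/tmp/spdk.sock. */
static void
app_started(void *ctx)
{
	(void)ctx;
	SPDK_NOTICELOG("application started\n");
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	(void)argc; (void)argv;
	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "spdk_tgt";
	/* Equivalent of the '-e bdev' flag above: enables the bdev
	 * tracepoint group that trace_get_info reports later in this log. */
	opts.tpoint_group_mask = "bdev";

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}
```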
00:07:44.906 [2024-11-27 07:02:55.895785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.478 07:02:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.478 07:02:56 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:45.478 07:02:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:45.478 07:02:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:45.478 07:02:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:45.478 07:02:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:45.478 07:02:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.478 07:02:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.478 07:02:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.478 ************************************ 00:07:45.478 START TEST rpc_integrity 00:07:45.478 ************************************ 00:07:45.478 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:45.478 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:45.478 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.478 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.479 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.479 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:45.479 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:45.479 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:45.479 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:45.479 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.479 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.479 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.479 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:45.479 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:45.479 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.479 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:45.740 { 00:07:45.740 "name": "Malloc0", 00:07:45.740 "aliases": [ 00:07:45.740 "faa1cdfe-e7db-4a90-8144-7cc48b49fd9c" 00:07:45.740 ], 00:07:45.740 "product_name": "Malloc disk", 00:07:45.740 "block_size": 512, 00:07:45.740 "num_blocks": 16384, 00:07:45.740 "uuid": "faa1cdfe-e7db-4a90-8144-7cc48b49fd9c", 00:07:45.740 "assigned_rate_limits": { 00:07:45.740 "rw_ios_per_sec": 0, 00:07:45.740 "rw_mbytes_per_sec": 0, 00:07:45.740 "r_mbytes_per_sec": 0, 00:07:45.740 "w_mbytes_per_sec": 0 00:07:45.740 }, 
00:07:45.740 "claimed": false, 00:07:45.740 "zoned": false, 00:07:45.740 "supported_io_types": { 00:07:45.740 "read": true, 00:07:45.740 "write": true, 00:07:45.740 "unmap": true, 00:07:45.740 "flush": true, 00:07:45.740 "reset": true, 00:07:45.740 "nvme_admin": false, 00:07:45.740 "nvme_io": false, 00:07:45.740 "nvme_io_md": false, 00:07:45.740 "write_zeroes": true, 00:07:45.740 "zcopy": true, 00:07:45.740 "get_zone_info": false, 00:07:45.740 "zone_management": false, 00:07:45.740 "zone_append": false, 00:07:45.740 "compare": false, 00:07:45.740 "compare_and_write": false, 00:07:45.740 "abort": true, 00:07:45.740 "seek_hole": false, 00:07:45.740 "seek_data": false, 00:07:45.740 "copy": true, 00:07:45.740 "nvme_iov_md": false 00:07:45.740 }, 00:07:45.740 "memory_domains": [ 00:07:45.740 { 00:07:45.740 "dma_device_id": "system", 00:07:45.740 "dma_device_type": 1 00:07:45.740 }, 00:07:45.740 { 00:07:45.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.740 "dma_device_type": 2 00:07:45.740 } 00:07:45.740 ], 00:07:45.740 "driver_specific": {} 00:07:45.740 } 00:07:45.740 ]' 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 [2024-11-27 07:02:56.746273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:45.740 [2024-11-27 07:02:56.746322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.740 [2024-11-27 07:02:56.746340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f5ae0 00:07:45.740 [2024-11-27 07:02:56.746348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.740 [2024-11-27 07:02:56.747909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.740 [2024-11-27 07:02:56.747945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:45.740 Passthru0 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:45.740 { 00:07:45.740 "name": "Malloc0", 00:07:45.740 "aliases": [ 00:07:45.740 "faa1cdfe-e7db-4a90-8144-7cc48b49fd9c" 00:07:45.740 ], 00:07:45.740 "product_name": "Malloc disk", 00:07:45.740 "block_size": 512, 00:07:45.740 "num_blocks": 16384, 00:07:45.740 "uuid": "faa1cdfe-e7db-4a90-8144-7cc48b49fd9c", 00:07:45.740 "assigned_rate_limits": { 00:07:45.740 "rw_ios_per_sec": 0, 00:07:45.740 "rw_mbytes_per_sec": 0, 00:07:45.740 "r_mbytes_per_sec": 0, 00:07:45.740 "w_mbytes_per_sec": 0 00:07:45.740 }, 00:07:45.740 "claimed": true, 00:07:45.740 "claim_type": "exclusive_write", 00:07:45.740 "zoned": false, 00:07:45.740 "supported_io_types": { 00:07:45.740 "read": true, 00:07:45.740 "write": true, 00:07:45.740 "unmap": true, 00:07:45.740 "flush": 
true, 00:07:45.740 "reset": true, 00:07:45.740 "nvme_admin": false, 00:07:45.740 "nvme_io": false, 00:07:45.740 "nvme_io_md": false, 00:07:45.740 "write_zeroes": true, 00:07:45.740 "zcopy": true, 00:07:45.740 "get_zone_info": false, 00:07:45.740 "zone_management": false, 00:07:45.740 "zone_append": false, 00:07:45.740 "compare": false, 00:07:45.740 "compare_and_write": false, 00:07:45.740 "abort": true, 00:07:45.740 "seek_hole": false, 00:07:45.740 "seek_data": false, 00:07:45.740 "copy": true, 00:07:45.740 "nvme_iov_md": false 00:07:45.740 }, 00:07:45.740 "memory_domains": [ 00:07:45.740 { 00:07:45.740 "dma_device_id": "system", 00:07:45.740 "dma_device_type": 1 00:07:45.740 }, 00:07:45.740 { 00:07:45.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.740 "dma_device_type": 2 00:07:45.740 } 00:07:45.740 ], 00:07:45.740 "driver_specific": {} 00:07:45.740 }, 00:07:45.740 { 00:07:45.740 "name": "Passthru0", 00:07:45.740 "aliases": [ 00:07:45.740 "9b81aa91-d695-5d8c-94f4-fd4bde45ab4c" 00:07:45.740 ], 00:07:45.740 "product_name": "passthru", 00:07:45.740 "block_size": 512, 00:07:45.740 "num_blocks": 16384, 00:07:45.740 "uuid": "9b81aa91-d695-5d8c-94f4-fd4bde45ab4c", 00:07:45.740 "assigned_rate_limits": { 00:07:45.740 "rw_ios_per_sec": 0, 00:07:45.740 "rw_mbytes_per_sec": 0, 00:07:45.740 "r_mbytes_per_sec": 0, 00:07:45.740 "w_mbytes_per_sec": 0 00:07:45.740 }, 00:07:45.740 "claimed": false, 00:07:45.740 "zoned": false, 00:07:45.740 "supported_io_types": { 00:07:45.740 "read": true, 00:07:45.740 "write": true, 00:07:45.740 "unmap": true, 00:07:45.740 "flush": true, 00:07:45.740 "reset": true, 00:07:45.740 "nvme_admin": false, 00:07:45.740 "nvme_io": false, 00:07:45.740 "nvme_io_md": false, 00:07:45.740 "write_zeroes": true, 00:07:45.740 "zcopy": true, 00:07:45.740 "get_zone_info": false, 00:07:45.740 "zone_management": false, 00:07:45.740 "zone_append": false, 00:07:45.740 "compare": false, 00:07:45.740 "compare_and_write": false, 00:07:45.740 "abort": true, 00:07:45.740 "seek_hole": false, 00:07:45.740 "seek_data": false, 00:07:45.740 "copy": true, 00:07:45.740 "nvme_iov_md": false 00:07:45.740 }, 00:07:45.740 "memory_domains": [ 00:07:45.740 { 00:07:45.740 "dma_device_id": "system", 00:07:45.740 "dma_device_type": 1 00:07:45.740 }, 00:07:45.740 { 00:07:45.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.740 "dma_device_type": 2 00:07:45.740 } 00:07:45.740 ], 00:07:45.740 "driver_specific": { 00:07:45.740 "passthru": { 00:07:45.740 "name": "Passthru0", 00:07:45.740 "base_bdev_name": "Malloc0" 00:07:45.740 } 00:07:45.740 } 00:07:45.740 } 00:07:45.740 ]' 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:45.740 07:02:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:45.740 00:07:45.740 real 0m0.304s 00:07:45.740 user 0m0.195s 00:07:45.740 sys 0m0.042s 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.740 07:02:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 ************************************ 00:07:45.740 END TEST rpc_integrity 00:07:45.740 ************************************ 00:07:46.001 07:02:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:46.001 07:02:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.001 07:02:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.001 07:02:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.001 ************************************ 00:07:46.001 START TEST rpc_plugins 00:07:46.001 ************************************ 00:07:46.001 07:02:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:46.001 07:02:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:46.001 07:02:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.001 07:02:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:46.001 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.001 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:46.001 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:46.001 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.001 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:46.001 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.001 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:46.001 { 00:07:46.001 "name": "Malloc1", 00:07:46.001 "aliases": [ 00:07:46.001 "01d3c171-04cc-4c8c-95f1-b17208f1f665" 00:07:46.001 ], 00:07:46.001 "product_name": "Malloc disk", 00:07:46.001 "block_size": 4096, 00:07:46.001 "num_blocks": 256, 00:07:46.001 "uuid": "01d3c171-04cc-4c8c-95f1-b17208f1f665", 00:07:46.001 "assigned_rate_limits": { 00:07:46.001 "rw_ios_per_sec": 0, 00:07:46.001 "rw_mbytes_per_sec": 0, 00:07:46.001 "r_mbytes_per_sec": 0, 00:07:46.001 "w_mbytes_per_sec": 0 00:07:46.001 }, 00:07:46.001 "claimed": false, 00:07:46.001 "zoned": false, 00:07:46.001 "supported_io_types": { 00:07:46.001 "read": true, 00:07:46.001 "write": true, 00:07:46.001 "unmap": true, 00:07:46.001 "flush": true, 00:07:46.001 "reset": true, 00:07:46.001 "nvme_admin": false, 00:07:46.001 "nvme_io": false, 00:07:46.001 "nvme_io_md": false, 00:07:46.001 "write_zeroes": true, 00:07:46.001 "zcopy": true, 00:07:46.001 "get_zone_info": false, 00:07:46.002 "zone_management": false, 00:07:46.002 "zone_append": false, 00:07:46.002 "compare": false, 00:07:46.002 "compare_and_write": false, 00:07:46.002 "abort": true, 00:07:46.002 "seek_hole": false, 00:07:46.002 "seek_data": false, 00:07:46.002 "copy": true, 00:07:46.002 "nvme_iov_md": false 
00:07:46.002 }, 00:07:46.002 "memory_domains": [ 00:07:46.002 { 00:07:46.002 "dma_device_id": "system", 00:07:46.002 "dma_device_type": 1 00:07:46.002 }, 00:07:46.002 { 00:07:46.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.002 "dma_device_type": 2 00:07:46.002 } 00:07:46.002 ], 00:07:46.002 "driver_specific": {} 00:07:46.002 } 00:07:46.002 ]' 00:07:46.002 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:46.002 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:46.002 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.002 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.002 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:46.002 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:46.002 07:02:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:46.002 00:07:46.002 real 0m0.152s 00:07:46.002 user 0m0.099s 00:07:46.002 sys 0m0.017s 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.002 07:02:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:46.002 ************************************ 00:07:46.002 END TEST rpc_plugins 00:07:46.002 ************************************ 00:07:46.002 07:02:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:46.002 07:02:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.002 07:02:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.002 07:02:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.263 ************************************ 00:07:46.263 START TEST rpc_trace_cmd_test 00:07:46.263 ************************************ 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:46.263 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2152049", 00:07:46.263 "tpoint_group_mask": "0x8", 00:07:46.263 "iscsi_conn": { 00:07:46.263 "mask": "0x2", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "scsi": { 00:07:46.263 "mask": "0x4", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "bdev": { 00:07:46.263 "mask": "0x8", 00:07:46.263 "tpoint_mask": "0xffffffffffffffff" 00:07:46.263 }, 00:07:46.263 "nvmf_rdma": { 00:07:46.263 "mask": "0x10", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "nvmf_tcp": { 00:07:46.263 "mask": "0x20", 00:07:46.263 
"tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "ftl": { 00:07:46.263 "mask": "0x40", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "blobfs": { 00:07:46.263 "mask": "0x80", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "dsa": { 00:07:46.263 "mask": "0x200", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "thread": { 00:07:46.263 "mask": "0x400", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "nvme_pcie": { 00:07:46.263 "mask": "0x800", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "iaa": { 00:07:46.263 "mask": "0x1000", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "nvme_tcp": { 00:07:46.263 "mask": "0x2000", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "bdev_nvme": { 00:07:46.263 "mask": "0x4000", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "sock": { 00:07:46.263 "mask": "0x8000", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "blob": { 00:07:46.263 "mask": "0x10000", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "bdev_raid": { 00:07:46.263 "mask": "0x20000", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 }, 00:07:46.263 "scheduler": { 00:07:46.263 "mask": "0x40000", 00:07:46.263 "tpoint_mask": "0x0" 00:07:46.263 } 00:07:46.263 }' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:46.263 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:46.524 07:02:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:46.524 00:07:46.524 real 0m0.252s 00:07:46.524 user 0m0.203s 00:07:46.524 sys 0m0.040s 00:07:46.524 07:02:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.524 07:02:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.524 ************************************ 00:07:46.524 END TEST rpc_trace_cmd_test 00:07:46.524 ************************************ 00:07:46.524 07:02:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:46.524 07:02:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:46.524 07:02:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:46.524 07:02:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.524 07:02:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.524 07:02:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.524 ************************************ 00:07:46.524 START TEST rpc_daemon_integrity 00:07:46.524 ************************************ 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.524 07:02:57 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.524 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:46.524 { 00:07:46.524 "name": "Malloc2", 00:07:46.524 "aliases": [ 00:07:46.524 "db871e3a-be85-49bd-8b60-39c8cb69cc9f" 00:07:46.524 ], 00:07:46.524 "product_name": "Malloc disk", 00:07:46.524 "block_size": 512, 00:07:46.525 "num_blocks": 16384, 00:07:46.525 "uuid": "db871e3a-be85-49bd-8b60-39c8cb69cc9f", 00:07:46.525 "assigned_rate_limits": { 00:07:46.525 "rw_ios_per_sec": 0, 00:07:46.525 "rw_mbytes_per_sec": 0, 00:07:46.525 "r_mbytes_per_sec": 0, 00:07:46.525 "w_mbytes_per_sec": 0 00:07:46.525 }, 00:07:46.525 "claimed": false, 00:07:46.525 "zoned": false, 00:07:46.525 "supported_io_types": { 00:07:46.525 "read": true, 00:07:46.525 "write": true, 00:07:46.525 "unmap": true, 00:07:46.525 "flush": true, 00:07:46.525 "reset": true, 00:07:46.525 "nvme_admin": false, 00:07:46.525 "nvme_io": false, 00:07:46.525 "nvme_io_md": false, 00:07:46.525 "write_zeroes": true, 00:07:46.525 "zcopy": true, 00:07:46.525 "get_zone_info": false, 00:07:46.525 "zone_management": false, 00:07:46.525 "zone_append": false, 00:07:46.525 "compare": false, 00:07:46.525 "compare_and_write": false, 00:07:46.525 "abort": true, 00:07:46.525 "seek_hole": false, 00:07:46.525 "seek_data": false, 00:07:46.525 "copy": true, 00:07:46.525 "nvme_iov_md": false 00:07:46.525 }, 00:07:46.525 "memory_domains": [ 00:07:46.525 { 00:07:46.525 "dma_device_id": "system", 00:07:46.525 "dma_device_type": 1 00:07:46.525 }, 00:07:46.525 { 00:07:46.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.525 "dma_device_type": 2 00:07:46.525 } 00:07:46.525 ], 00:07:46.525 "driver_specific": {} 00:07:46.525 } 00:07:46.525 ]' 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.525 [2024-11-27 07:02:57.684790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:46.525 
[2024-11-27 07:02:57.684834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.525 [2024-11-27 07:02:57.684854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f6040 00:07:46.525 [2024-11-27 07:02:57.684861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.525 [2024-11-27 07:02:57.686320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.525 [2024-11-27 07:02:57.686355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:46.525 Passthru0 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.525 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:46.525 { 00:07:46.525 "name": "Malloc2", 00:07:46.525 "aliases": [ 00:07:46.525 "db871e3a-be85-49bd-8b60-39c8cb69cc9f" 00:07:46.525 ], 00:07:46.525 "product_name": "Malloc disk", 00:07:46.525 "block_size": 512, 00:07:46.525 "num_blocks": 16384, 00:07:46.525 "uuid": "db871e3a-be85-49bd-8b60-39c8cb69cc9f", 00:07:46.525 "assigned_rate_limits": { 00:07:46.525 "rw_ios_per_sec": 0, 00:07:46.525 "rw_mbytes_per_sec": 0, 00:07:46.525 "r_mbytes_per_sec": 0, 00:07:46.525 "w_mbytes_per_sec": 0 00:07:46.525 }, 00:07:46.525 "claimed": true, 00:07:46.525 "claim_type": "exclusive_write", 00:07:46.525 "zoned": false, 00:07:46.525 "supported_io_types": { 00:07:46.525 "read": true, 00:07:46.525 "write": true, 00:07:46.525 "unmap": true, 00:07:46.525 "flush": true, 00:07:46.525 "reset": true, 00:07:46.525 "nvme_admin": false, 00:07:46.525 "nvme_io": false, 00:07:46.525 "nvme_io_md": false, 00:07:46.525 "write_zeroes": true, 00:07:46.525 "zcopy": true, 00:07:46.525 "get_zone_info": false, 00:07:46.525 "zone_management": false, 00:07:46.525 "zone_append": false, 00:07:46.525 "compare": false, 00:07:46.525 "compare_and_write": false, 00:07:46.525 "abort": true, 00:07:46.525 "seek_hole": false, 00:07:46.525 "seek_data": false, 00:07:46.525 "copy": true, 00:07:46.525 "nvme_iov_md": false 00:07:46.525 }, 00:07:46.525 "memory_domains": [ 00:07:46.525 { 00:07:46.525 "dma_device_id": "system", 00:07:46.525 "dma_device_type": 1 00:07:46.525 }, 00:07:46.525 { 00:07:46.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.525 "dma_device_type": 2 00:07:46.525 } 00:07:46.525 ], 00:07:46.525 "driver_specific": {} 00:07:46.525 }, 00:07:46.525 { 00:07:46.525 "name": "Passthru0", 00:07:46.525 "aliases": [ 00:07:46.525 "7aac222b-706c-55cb-9eef-40df6fd18bef" 00:07:46.525 ], 00:07:46.525 "product_name": "passthru", 00:07:46.525 "block_size": 512, 00:07:46.525 "num_blocks": 16384, 00:07:46.525 "uuid": "7aac222b-706c-55cb-9eef-40df6fd18bef", 00:07:46.525 "assigned_rate_limits": { 00:07:46.525 "rw_ios_per_sec": 0, 00:07:46.525 "rw_mbytes_per_sec": 0, 00:07:46.525 "r_mbytes_per_sec": 0, 00:07:46.525 "w_mbytes_per_sec": 0 00:07:46.525 }, 00:07:46.525 "claimed": false, 00:07:46.525 "zoned": false, 00:07:46.526 "supported_io_types": { 00:07:46.526 "read": true, 00:07:46.526 "write": true, 00:07:46.526 "unmap": true, 00:07:46.526 "flush": true, 00:07:46.526 "reset": true, 
00:07:46.526 "nvme_admin": false, 00:07:46.526 "nvme_io": false, 00:07:46.526 "nvme_io_md": false, 00:07:46.526 "write_zeroes": true, 00:07:46.526 "zcopy": true, 00:07:46.526 "get_zone_info": false, 00:07:46.526 "zone_management": false, 00:07:46.526 "zone_append": false, 00:07:46.526 "compare": false, 00:07:46.526 "compare_and_write": false, 00:07:46.526 "abort": true, 00:07:46.526 "seek_hole": false, 00:07:46.526 "seek_data": false, 00:07:46.526 "copy": true, 00:07:46.526 "nvme_iov_md": false 00:07:46.526 }, 00:07:46.526 "memory_domains": [ 00:07:46.526 { 00:07:46.526 "dma_device_id": "system", 00:07:46.526 "dma_device_type": 1 00:07:46.526 }, 00:07:46.526 { 00:07:46.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.526 "dma_device_type": 2 00:07:46.526 } 00:07:46.526 ], 00:07:46.526 "driver_specific": { 00:07:46.526 "passthru": { 00:07:46.526 "name": "Passthru0", 00:07:46.526 "base_bdev_name": "Malloc2" 00:07:46.526 } 00:07:46.526 } 00:07:46.526 } 00:07:46.526 ]' 00:07:46.526 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:46.786 00:07:46.786 real 0m0.304s 00:07:46.786 user 0m0.182s 00:07:46.786 sys 0m0.053s 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.786 07:02:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:46.786 ************************************ 00:07:46.786 END TEST rpc_daemon_integrity 00:07:46.786 ************************************ 00:07:46.786 07:02:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:46.786 07:02:57 rpc -- rpc/rpc.sh@84 -- # killprocess 2152049 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 2152049 ']' 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@958 -- # kill -0 2152049 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@959 -- # uname 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2152049 
00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2152049' 00:07:46.786 killing process with pid 2152049 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@973 -- # kill 2152049 00:07:46.786 07:02:57 rpc -- common/autotest_common.sh@978 -- # wait 2152049 00:07:47.046 00:07:47.046 real 0m2.704s 00:07:47.046 user 0m3.462s 00:07:47.046 sys 0m0.817s 00:07:47.046 07:02:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.046 07:02:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 ************************************ 00:07:47.046 END TEST rpc 00:07:47.046 ************************************ 00:07:47.046 07:02:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:47.046 07:02:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.046 07:02:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.046 07:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:47.305 ************************************ 00:07:47.305 START TEST skip_rpc 00:07:47.305 ************************************ 00:07:47.305 07:02:58 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:47.305 * Looking for test storage... 00:07:47.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:47.305 07:02:58 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.305 07:02:58 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.305 07:02:58 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.305 07:02:58 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.305 07:02:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.305 07:02:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.305 07:02:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.305 07:02:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.305 07:02:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.306 07:02:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.306 --rc genhtml_branch_coverage=1 00:07:47.306 --rc genhtml_function_coverage=1 00:07:47.306 --rc genhtml_legend=1 00:07:47.306 --rc geninfo_all_blocks=1 00:07:47.306 --rc geninfo_unexecuted_blocks=1 00:07:47.306 00:07:47.306 ' 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.306 --rc genhtml_branch_coverage=1 00:07:47.306 --rc genhtml_function_coverage=1 00:07:47.306 --rc genhtml_legend=1 00:07:47.306 --rc geninfo_all_blocks=1 00:07:47.306 --rc geninfo_unexecuted_blocks=1 00:07:47.306 00:07:47.306 ' 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.306 --rc genhtml_branch_coverage=1 00:07:47.306 --rc genhtml_function_coverage=1 00:07:47.306 --rc genhtml_legend=1 00:07:47.306 --rc geninfo_all_blocks=1 00:07:47.306 --rc geninfo_unexecuted_blocks=1 00:07:47.306 00:07:47.306 ' 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.306 --rc genhtml_branch_coverage=1 00:07:47.306 --rc genhtml_function_coverage=1 00:07:47.306 --rc genhtml_legend=1 00:07:47.306 --rc geninfo_all_blocks=1 00:07:47.306 --rc geninfo_unexecuted_blocks=1 00:07:47.306 00:07:47.306 ' 00:07:47.306 07:02:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:47.306 07:02:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:47.306 07:02:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.306 07:02:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.566 ************************************ 00:07:47.566 START TEST skip_rpc 00:07:47.566 ************************************ 00:07:47.566 07:02:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:47.566 
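The skip_rpc test launches the target with --no-rpc-server and expects every RPC to fail. Under the hood, rpc_cmd is just a JSON-RPC 2.0 exchange over SPDK's default unix socket; a minimal standalone client sketch using only POSIX sockets (the socket path and method name come straight from this log, the rest is illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Bare-bones stand-in for rpc_cmd: one JSON-RPC 2.0 request over SPDK's
 * default unix socket. Against a --no-rpc-server target the connect()
 * fails and we exit nonzero -- exactly the failure skip_rpc asserts with
 * "NOT rpc_cmd spdk_get_version" below. */
int
main(void)
{
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"spdk_get_version\"}";
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char resp[4096];
	ssize_t n;
	int fd;

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		perror("connect");	/* expected with --no-rpc-server */
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0) {
		close(fd);
		return 1;
	}
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);	/* {"jsonrpc":"2.0","id":1,"result":{...}} */
	}
	close(fd);
	return 0;
}
```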
07:02:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2152899 00:07:47.566 07:02:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:47.566 07:02:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:47.566 07:02:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:47.566 [2024-11-27 07:02:58.584679] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:07:47.566 [2024-11-27 07:02:58.584738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152899 ] 00:07:47.566 [2024-11-27 07:02:58.647480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.566 [2024-11-27 07:02:58.694350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2152899 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2152899 ']' 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2152899 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2152899 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2152899' 00:07:52.847 killing process with pid 2152899 00:07:52.847 07:03:03 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2152899 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2152899 00:07:52.847 00:07:52.847 real 0m5.268s 00:07:52.847 user 0m5.035s 00:07:52.847 sys 0m0.273s 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.847 07:03:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.847 ************************************ 00:07:52.847 END TEST skip_rpc 00:07:52.847 ************************************ 00:07:52.847 07:03:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:52.847 07:03:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.847 07:03:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.847 07:03:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.847 ************************************ 00:07:52.847 START TEST skip_rpc_with_json 00:07:52.847 ************************************ 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2154040 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2154040 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2154040 ']' 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.847 07:03:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:52.847 [2024-11-27 07:03:03.920099] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
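The skip_rpc run above starts spdk_tgt with --no-rpc-server, so /var/tmp/spdk.sock is never created and the NOT-wrapped rpc_cmd spdk_get_version has to fail (es=1) before the target is killed. A minimal hand-run sketch of the same check, assuming a local SPDK build at ./build/bin and the stock ./scripts/rpc.py (paths are assumptions, not taken from this log):

./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # RPC server disabled
tgt=$!
sleep 5   # mirrors the settle time at rpc/skip_rpc.sh@19
# spdk_get_version must fail: no Unix-domain RPC socket was ever opened
./scripts/rpc.py spdk_get_version && echo 'unexpected success' || echo 'RPC refused, as the test expects'
kill "$tgt"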
00:07:52.847 [2024-11-27 07:03:03.920157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154040 ] 00:07:52.847 [2024-11-27 07:03:04.007953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.847 [2024-11-27 07:03:04.046909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.788 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.788 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:53.788 07:03:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:53.789 [2024-11-27 07:03:04.728829] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:53.789 request: 00:07:53.789 { 00:07:53.789 "trtype": "tcp", 00:07:53.789 "method": "nvmf_get_transports", 00:07:53.789 "req_id": 1 00:07:53.789 } 00:07:53.789 Got JSON-RPC error response 00:07:53.789 response: 00:07:53.789 { 00:07:53.789 "code": -19, 00:07:53.789 "message": "No such device" 00:07:53.789 } 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:53.789 [2024-11-27 07:03:04.740943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.789 07:03:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:53.789 { 00:07:53.789 "subsystems": [ 00:07:53.789 { 00:07:53.789 "subsystem": "fsdev", 00:07:53.789 "config": [ 00:07:53.789 { 00:07:53.789 "method": "fsdev_set_opts", 00:07:53.789 "params": { 00:07:53.789 "fsdev_io_pool_size": 65535, 00:07:53.789 "fsdev_io_cache_size": 256 00:07:53.789 } 00:07:53.789 } 00:07:53.789 ] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "vfio_user_target", 00:07:53.789 "config": null 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "keyring", 00:07:53.789 "config": [] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "iobuf", 00:07:53.789 "config": [ 00:07:53.789 { 00:07:53.789 "method": "iobuf_set_options", 00:07:53.789 "params": { 00:07:53.789 "small_pool_count": 8192, 00:07:53.789 "large_pool_count": 1024, 00:07:53.789 "small_bufsize": 8192, 00:07:53.789 "large_bufsize": 135168, 00:07:53.789 "enable_numa": false 00:07:53.789 } 00:07:53.789 } 
00:07:53.789 ] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "sock", 00:07:53.789 "config": [ 00:07:53.789 { 00:07:53.789 "method": "sock_set_default_impl", 00:07:53.789 "params": { 00:07:53.789 "impl_name": "posix" 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "method": "sock_impl_set_options", 00:07:53.789 "params": { 00:07:53.789 "impl_name": "ssl", 00:07:53.789 "recv_buf_size": 4096, 00:07:53.789 "send_buf_size": 4096, 00:07:53.789 "enable_recv_pipe": true, 00:07:53.789 "enable_quickack": false, 00:07:53.789 "enable_placement_id": 0, 00:07:53.789 "enable_zerocopy_send_server": true, 00:07:53.789 "enable_zerocopy_send_client": false, 00:07:53.789 "zerocopy_threshold": 0, 00:07:53.789 "tls_version": 0, 00:07:53.789 "enable_ktls": false 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "method": "sock_impl_set_options", 00:07:53.789 "params": { 00:07:53.789 "impl_name": "posix", 00:07:53.789 "recv_buf_size": 2097152, 00:07:53.789 "send_buf_size": 2097152, 00:07:53.789 "enable_recv_pipe": true, 00:07:53.789 "enable_quickack": false, 00:07:53.789 "enable_placement_id": 0, 00:07:53.789 "enable_zerocopy_send_server": true, 00:07:53.789 "enable_zerocopy_send_client": false, 00:07:53.789 "zerocopy_threshold": 0, 00:07:53.789 "tls_version": 0, 00:07:53.789 "enable_ktls": false 00:07:53.789 } 00:07:53.789 } 00:07:53.789 ] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "vmd", 00:07:53.789 "config": [] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "accel", 00:07:53.789 "config": [ 00:07:53.789 { 00:07:53.789 "method": "accel_set_options", 00:07:53.789 "params": { 00:07:53.789 "small_cache_size": 128, 00:07:53.789 "large_cache_size": 16, 00:07:53.789 "task_count": 2048, 00:07:53.789 "sequence_count": 2048, 00:07:53.789 "buf_count": 2048 00:07:53.789 } 00:07:53.789 } 00:07:53.789 ] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "bdev", 00:07:53.789 "config": [ 00:07:53.789 { 00:07:53.789 "method": "bdev_set_options", 00:07:53.789 "params": { 00:07:53.789 "bdev_io_pool_size": 65535, 00:07:53.789 "bdev_io_cache_size": 256, 00:07:53.789 "bdev_auto_examine": true, 00:07:53.789 "iobuf_small_cache_size": 128, 00:07:53.789 "iobuf_large_cache_size": 16 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "method": "bdev_raid_set_options", 00:07:53.789 "params": { 00:07:53.789 "process_window_size_kb": 1024, 00:07:53.789 "process_max_bandwidth_mb_sec": 0 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "method": "bdev_iscsi_set_options", 00:07:53.789 "params": { 00:07:53.789 "timeout_sec": 30 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "method": "bdev_nvme_set_options", 00:07:53.789 "params": { 00:07:53.789 "action_on_timeout": "none", 00:07:53.789 "timeout_us": 0, 00:07:53.789 "timeout_admin_us": 0, 00:07:53.789 "keep_alive_timeout_ms": 10000, 00:07:53.789 "arbitration_burst": 0, 00:07:53.789 "low_priority_weight": 0, 00:07:53.789 "medium_priority_weight": 0, 00:07:53.789 "high_priority_weight": 0, 00:07:53.789 "nvme_adminq_poll_period_us": 10000, 00:07:53.789 "nvme_ioq_poll_period_us": 0, 00:07:53.789 "io_queue_requests": 0, 00:07:53.789 "delay_cmd_submit": true, 00:07:53.789 "transport_retry_count": 4, 00:07:53.789 "bdev_retry_count": 3, 00:07:53.789 "transport_ack_timeout": 0, 00:07:53.789 "ctrlr_loss_timeout_sec": 0, 00:07:53.789 "reconnect_delay_sec": 0, 00:07:53.789 "fast_io_fail_timeout_sec": 0, 00:07:53.789 "disable_auto_failback": false, 00:07:53.789 "generate_uuids": false, 00:07:53.789 "transport_tos": 
0, 00:07:53.789 "nvme_error_stat": false, 00:07:53.789 "rdma_srq_size": 0, 00:07:53.789 "io_path_stat": false, 00:07:53.789 "allow_accel_sequence": false, 00:07:53.789 "rdma_max_cq_size": 0, 00:07:53.789 "rdma_cm_event_timeout_ms": 0, 00:07:53.789 "dhchap_digests": [ 00:07:53.789 "sha256", 00:07:53.789 "sha384", 00:07:53.789 "sha512" 00:07:53.789 ], 00:07:53.789 "dhchap_dhgroups": [ 00:07:53.789 "null", 00:07:53.789 "ffdhe2048", 00:07:53.789 "ffdhe3072", 00:07:53.789 "ffdhe4096", 00:07:53.789 "ffdhe6144", 00:07:53.789 "ffdhe8192" 00:07:53.789 ] 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "method": "bdev_nvme_set_hotplug", 00:07:53.789 "params": { 00:07:53.789 "period_us": 100000, 00:07:53.789 "enable": false 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "method": "bdev_wait_for_examine" 00:07:53.789 } 00:07:53.789 ] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "scsi", 00:07:53.789 "config": null 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "scheduler", 00:07:53.789 "config": [ 00:07:53.789 { 00:07:53.789 "method": "framework_set_scheduler", 00:07:53.789 "params": { 00:07:53.789 "name": "static" 00:07:53.789 } 00:07:53.789 } 00:07:53.789 ] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "vhost_scsi", 00:07:53.789 "config": [] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "vhost_blk", 00:07:53.789 "config": [] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "ublk", 00:07:53.789 "config": [] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "nbd", 00:07:53.789 "config": [] 00:07:53.789 }, 00:07:53.789 { 00:07:53.789 "subsystem": "nvmf", 00:07:53.789 "config": [ 00:07:53.789 { 00:07:53.789 "method": "nvmf_set_config", 00:07:53.789 "params": { 00:07:53.789 "discovery_filter": "match_any", 00:07:53.789 "admin_cmd_passthru": { 00:07:53.789 "identify_ctrlr": false 00:07:53.789 }, 00:07:53.789 "dhchap_digests": [ 00:07:53.789 "sha256", 00:07:53.789 "sha384", 00:07:53.789 "sha512" 00:07:53.789 ], 00:07:53.789 "dhchap_dhgroups": [ 00:07:53.789 "null", 00:07:53.789 "ffdhe2048", 00:07:53.789 "ffdhe3072", 00:07:53.789 "ffdhe4096", 00:07:53.789 "ffdhe6144", 00:07:53.789 "ffdhe8192" 00:07:53.789 ] 00:07:53.789 } 00:07:53.789 }, 00:07:53.789 { 00:07:53.790 "method": "nvmf_set_max_subsystems", 00:07:53.790 "params": { 00:07:53.790 "max_subsystems": 1024 00:07:53.790 } 00:07:53.790 }, 00:07:53.790 { 00:07:53.790 "method": "nvmf_set_crdt", 00:07:53.790 "params": { 00:07:53.790 "crdt1": 0, 00:07:53.790 "crdt2": 0, 00:07:53.790 "crdt3": 0 00:07:53.790 } 00:07:53.790 }, 00:07:53.790 { 00:07:53.790 "method": "nvmf_create_transport", 00:07:53.790 "params": { 00:07:53.790 "trtype": "TCP", 00:07:53.790 "max_queue_depth": 128, 00:07:53.790 "max_io_qpairs_per_ctrlr": 127, 00:07:53.790 "in_capsule_data_size": 4096, 00:07:53.790 "max_io_size": 131072, 00:07:53.790 "io_unit_size": 131072, 00:07:53.790 "max_aq_depth": 128, 00:07:53.790 "num_shared_buffers": 511, 00:07:53.790 "buf_cache_size": 4294967295, 00:07:53.790 "dif_insert_or_strip": false, 00:07:53.790 "zcopy": false, 00:07:53.790 "c2h_success": true, 00:07:53.790 "sock_priority": 0, 00:07:53.790 "abort_timeout_sec": 1, 00:07:53.790 "ack_timeout": 0, 00:07:53.790 "data_wr_pool_size": 0 00:07:53.790 } 00:07:53.790 } 00:07:53.790 ] 00:07:53.790 }, 00:07:53.790 { 00:07:53.790 "subsystem": "iscsi", 00:07:53.790 "config": [ 00:07:53.790 { 00:07:53.790 "method": "iscsi_set_options", 00:07:53.790 "params": { 00:07:53.790 "node_base": "iqn.2016-06.io.spdk", 00:07:53.790 "max_sessions": 
128, 00:07:53.790 "max_connections_per_session": 2, 00:07:53.790 "max_queue_depth": 64, 00:07:53.790 "default_time2wait": 2, 00:07:53.790 "default_time2retain": 20, 00:07:53.790 "first_burst_length": 8192, 00:07:53.790 "immediate_data": true, 00:07:53.790 "allow_duplicated_isid": false, 00:07:53.790 "error_recovery_level": 0, 00:07:53.790 "nop_timeout": 60, 00:07:53.790 "nop_in_interval": 30, 00:07:53.790 "disable_chap": false, 00:07:53.790 "require_chap": false, 00:07:53.790 "mutual_chap": false, 00:07:53.790 "chap_group": 0, 00:07:53.790 "max_large_datain_per_connection": 64, 00:07:53.790 "max_r2t_per_connection": 4, 00:07:53.790 "pdu_pool_size": 36864, 00:07:53.790 "immediate_data_pool_size": 16384, 00:07:53.790 "data_out_pool_size": 2048 00:07:53.790 } 00:07:53.790 } 00:07:53.790 ] 00:07:53.790 } 00:07:53.790 ] 00:07:53.790 } 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2154040 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2154040 ']' 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2154040 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154040 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154040' 00:07:53.790 killing process with pid 2154040 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2154040 00:07:53.790 07:03:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2154040 00:07:54.051 07:03:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2154386 00:07:54.051 07:03:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:54.051 07:03:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2154386 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2154386 ']' 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2154386 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154386 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
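skip_rpc_with_json above round-trips the configuration: with the RPC server up, save_config produces test/rpc/config.json (the dump printed above), the target is killed, and a fresh one is booted purely from that file with --json plus --no-rpc-server, later grepped for the 'TCP Transport Init' notice. Condensed by hand, assuming the default /var/tmp/spdk.sock socket and local paths:

# capture the live target's configuration as JSON
./scripts/rpc.py save_config > /tmp/config.json
# relaunch from the saved file with no RPC server at all
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json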
# echo 'killing process with pid 2154386' 00:07:59.362 killing process with pid 2154386 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2154386 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2154386 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:59.362 00:07:59.362 real 0m6.571s 00:07:59.362 user 0m6.489s 00:07:59.362 sys 0m0.564s 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.362 ************************************ 00:07:59.362 END TEST skip_rpc_with_json 00:07:59.362 ************************************ 00:07:59.362 07:03:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:59.362 07:03:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.362 07:03:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.362 07:03:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.362 ************************************ 00:07:59.362 START TEST skip_rpc_with_delay 00:07:59.362 ************************************ 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.362 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.363 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.363 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.363 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:59.363 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:59.624 
[2024-11-27 07:03:10.573586] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:59.624 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:59.624 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.624 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.624 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.624 00:07:59.624 real 0m0.082s 00:07:59.624 user 0m0.057s 00:07:59.624 sys 0m0.025s 00:07:59.624 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.624 07:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:59.624 ************************************ 00:07:59.624 END TEST skip_rpc_with_delay 00:07:59.624 ************************************ 00:07:59.624 07:03:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:59.624 07:03:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:59.624 07:03:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:59.624 07:03:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.624 07:03:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.624 07:03:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.624 ************************************ 00:07:59.624 START TEST exit_on_failed_rpc_init 00:07:59.624 ************************************ 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2155827 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2155827 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2155827 ']' 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.624 07:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:59.624 [2024-11-27 07:03:10.735470] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
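skip_rpc_with_delay above is a deliberate flag conflict: --wait-for-rpc asks the app to pause until an RPC arrives, which can never happen once --no-rpc-server suppresses the RPC server, so app.c:842 rejects the combination and the NOT wrapper records the expected es=1. Reproduced by hand (assumed build path; a non-zero exit is the point):

./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
echo "exit status: $?"   # non-zero, after the 'Cannot use --wait-for-rpc ...' error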
00:07:59.624 [2024-11-27 07:03:10.735522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155827 ] 00:07:59.624 [2024-11-27 07:03:10.820529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.885 [2024-11-27 07:03:10.852395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:00.456 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:00.456 [2024-11-27 07:03:11.562113] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:00.456 [2024-11-27 07:03:11.562178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156177 ] 00:08:00.456 [2024-11-27 07:03:11.649766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.718 [2024-11-27 07:03:11.685393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.718 [2024-11-27 07:03:11.685443] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
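exit_on_failed_rpc_init above now has its failure: the second spdk_tgt (-m 0x2) dies in rpc.c because the first instance still owns the default /var/tmp/spdk.sock. Two targets can coexist if the second gets its own RPC socket via -r (the same flag the json_config test uses later); a sketch under the same assumed paths:

./build/bin/spdk_tgt -m 0x1 &                             # owns /var/tmp/spdk.sock
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &      # separate socket, no clash
./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version  # address the second instance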
00:08:00.718 [2024-11-27 07:03:11.685453] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:00.718 [2024-11-27 07:03:11.685460] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2155827 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2155827 ']' 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2155827 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155827 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155827' 00:08:00.719 killing process with pid 2155827 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2155827 00:08:00.719 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2155827 00:08:00.980 00:08:00.980 real 0m1.299s 00:08:00.980 user 0m1.494s 00:08:00.980 sys 0m0.386s 00:08:00.980 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.980 07:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:00.980 ************************************ 00:08:00.980 END TEST exit_on_failed_rpc_init 00:08:00.980 ************************************ 00:08:00.980 07:03:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:00.980 00:08:00.980 real 0m13.746s 00:08:00.980 user 0m13.304s 00:08:00.980 sys 0m1.575s 00:08:00.980 07:03:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.980 07:03:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.980 ************************************ 00:08:00.980 END TEST skip_rpc 00:08:00.980 ************************************ 00:08:00.980 07:03:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:00.980 07:03:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.980 07:03:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.980 07:03:12 -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.980 ************************************ 00:08:00.980 START TEST rpc_client 00:08:00.980 ************************************ 00:08:00.980 07:03:12 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:01.242 * Looking for test storage... 00:08:01.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.242 07:03:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.242 --rc genhtml_branch_coverage=1 00:08:01.242 --rc genhtml_function_coverage=1 00:08:01.242 --rc genhtml_legend=1 00:08:01.242 --rc geninfo_all_blocks=1 00:08:01.242 --rc geninfo_unexecuted_blocks=1 00:08:01.242 00:08:01.242 ' 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.242 --rc genhtml_branch_coverage=1 00:08:01.242 --rc genhtml_function_coverage=1 00:08:01.242 --rc genhtml_legend=1 00:08:01.242 --rc geninfo_all_blocks=1 00:08:01.242 --rc geninfo_unexecuted_blocks=1 00:08:01.242 00:08:01.242 ' 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.242 --rc genhtml_branch_coverage=1 00:08:01.242 --rc genhtml_function_coverage=1 00:08:01.242 --rc genhtml_legend=1 00:08:01.242 --rc geninfo_all_blocks=1 00:08:01.242 --rc geninfo_unexecuted_blocks=1 00:08:01.242 00:08:01.242 ' 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.242 --rc genhtml_branch_coverage=1 00:08:01.242 --rc genhtml_function_coverage=1 00:08:01.242 --rc genhtml_legend=1 00:08:01.242 --rc geninfo_all_blocks=1 00:08:01.242 --rc geninfo_unexecuted_blocks=1 00:08:01.242 00:08:01.242 ' 00:08:01.242 07:03:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:01.242 OK 00:08:01.242 07:03:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:01.242 00:08:01.242 real 0m0.219s 00:08:01.242 user 0m0.125s 00:08:01.242 sys 0m0.108s 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.242 07:03:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:01.242 ************************************ 00:08:01.242 END TEST rpc_client 00:08:01.242 ************************************ 00:08:01.242 07:03:12 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
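The dense xtrace from scripts/common.sh above is the coverage gate: lt delegates to cmp_versions, which splits the installed lcov version and '2' into dot-separated fields and compares them numerically to decide which LCOV_OPTS to export. A rough standalone stand-in for the same 'lt 1.15 2' test, swapping the script's field-by-field loop for sort -V (my simplification, not the repo's code):

lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
lt 1.15 2 && echo 'lcov < 2: export the branch/function coverage flags shown above'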
00:08:01.242 07:03:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.242 07:03:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.242 07:03:12 -- common/autotest_common.sh@10 -- # set +x 00:08:01.242 ************************************ 00:08:01.242 START TEST json_config 00:08:01.242 ************************************ 00:08:01.242 07:03:12 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:01.505 07:03:12 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.506 07:03:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.506 07:03:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.506 07:03:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.506 07:03:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.506 07:03:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.506 07:03:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.506 07:03:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.506 07:03:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:01.506 07:03:12 json_config -- scripts/common.sh@345 -- # : 1 00:08:01.506 07:03:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.506 07:03:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.506 07:03:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:01.506 07:03:12 json_config -- scripts/common.sh@353 -- # local d=1 00:08:01.506 07:03:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.506 07:03:12 json_config -- scripts/common.sh@355 -- # echo 1 00:08:01.506 07:03:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.506 07:03:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@353 -- # local d=2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.506 07:03:12 json_config -- scripts/common.sh@355 -- # echo 2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.506 07:03:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.506 07:03:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.506 07:03:12 json_config -- scripts/common.sh@368 -- # return 0 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.506 --rc genhtml_branch_coverage=1 00:08:01.506 --rc genhtml_function_coverage=1 00:08:01.506 --rc genhtml_legend=1 00:08:01.506 --rc geninfo_all_blocks=1 00:08:01.506 --rc geninfo_unexecuted_blocks=1 00:08:01.506 00:08:01.506 ' 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.506 --rc genhtml_branch_coverage=1 00:08:01.506 --rc genhtml_function_coverage=1 00:08:01.506 --rc genhtml_legend=1 00:08:01.506 --rc geninfo_all_blocks=1 00:08:01.506 --rc geninfo_unexecuted_blocks=1 00:08:01.506 00:08:01.506 ' 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.506 --rc genhtml_branch_coverage=1 00:08:01.506 --rc genhtml_function_coverage=1 00:08:01.506 --rc genhtml_legend=1 00:08:01.506 --rc geninfo_all_blocks=1 00:08:01.506 --rc geninfo_unexecuted_blocks=1 00:08:01.506 00:08:01.506 ' 00:08:01.506 07:03:12 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.506 --rc genhtml_branch_coverage=1 00:08:01.506 --rc genhtml_function_coverage=1 00:08:01.506 --rc genhtml_legend=1 00:08:01.506 --rc geninfo_all_blocks=1 00:08:01.506 --rc geninfo_unexecuted_blocks=1 00:08:01.506 00:08:01.506 ' 00:08:01.506 07:03:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:01.506 07:03:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.506 07:03:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.506 07:03:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.506 07:03:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.506 07:03:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.506 07:03:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.506 07:03:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.506 07:03:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.506 07:03:12 json_config -- paths/export.sh@5 -- # export PATH 00:08:01.506 07:03:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@51 -- # : 0 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
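nvmf/common.sh above builds the host identity from nvme-cli: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, its UUID tail becomes NVME_HOSTID, and both feed the NVME_HOST arguments used by later 'nvme connect' calls. By hand (requires nvme-cli; the variable names here are illustrative and the UUID differs per host):

hostnqn=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
hostid=${hostnqn##*:}         # keep only the UUID after the last colon
# shape of a later initiator attach built from these values (subsystem from NVME_SUBNQN):
# nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn --hostnqn="$hostnqn" --hostid="$hostid"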
00:08:01.506 07:03:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.506 07:03:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.506 07:03:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:01.506 07:03:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:01.506 07:03:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:01.507 INFO: JSON configuration test init 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 07:03:12 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:01.507 07:03:12 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:01.507 07:03:12 json_config -- json_config/common.sh@10 -- # shift 00:08:01.507 07:03:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:01.507 07:03:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:01.507 07:03:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:01.507 07:03:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:01.507 07:03:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:01.507 07:03:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2156388 00:08:01.507 07:03:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:01.507 Waiting for target to run... 00:08:01.507 07:03:12 json_config -- json_config/common.sh@25 -- # waitforlisten 2156388 /var/tmp/spdk_tgt.sock 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 2156388 ']' 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.507 07:03:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:01.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.507 07:03:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 [2024-11-27 07:03:12.674535] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
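json_config_test_start_app above launches the target as spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc, so once the RPC socket is up the app idles before subsystem init and every later tgt_rpc call goes through that socket. A manual equivalent of unblocking it (framework_start_init is the standard SPDK RPC for this step, though it is not shown in this excerpt):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# pre-init RPCs (accel/iobuf options, load_config, ...) would go here
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init   # proceed with subsystem init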
00:08:01.507 [2024-11-27 07:03:12.674615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156388 ] 00:08:02.079 [2024-11-27 07:03:12.975570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.079 [2024-11-27 07:03:12.999636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.341 07:03:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.341 07:03:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:02.341 07:03:13 json_config -- json_config/common.sh@26 -- # echo '' 00:08:02.341 00:08:02.341 07:03:13 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:02.341 07:03:13 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:02.341 07:03:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.341 07:03:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.341 07:03:13 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:02.341 07:03:13 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:02.341 07:03:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.341 07:03:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.341 07:03:13 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:02.341 07:03:13 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:02.341 07:03:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:02.911 07:03:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.911 07:03:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:02.911 07:03:14 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:02.911 07:03:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:03.172 07:03:14 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@54 -- # sort 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:03.172 07:03:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.172 07:03:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:03.172 07:03:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.172 07:03:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:03.172 07:03:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:03.172 07:03:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:03.433 MallocForNvmf0 00:08:03.433 07:03:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:03.433 07:03:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:03.694 MallocForNvmf1 00:08:03.694 07:03:14 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:03.694 07:03:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:03.694 [2024-11-27 07:03:14.809422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.694 07:03:14 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.694 07:03:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.956 07:03:15 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:03.956 07:03:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:04.217 07:03:15 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:04.217 07:03:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:04.217 07:03:15 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:04.217 07:03:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:04.478 [2024-11-27 07:03:15.527593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:04.478 07:03:15 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:04.478 07:03:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.478 07:03:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.478 07:03:15 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:04.478 07:03:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.478 07:03:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.478 07:03:15 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:04.478 07:03:15 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:04.478 07:03:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:04.738 MallocBdevForConfigChangeCheck 00:08:04.738 07:03:15 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:04.738 07:03:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.738 07:03:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.738 07:03:15 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:04.738 07:03:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:04.997 07:03:16 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:04.997 INFO: shutting down applications... 
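Annotation: the RPC trace above builds the whole NVMe-oF/TCP target config with a handful of rpc.py calls. A minimal standalone sketch of the same sequence, assuming spdk_tgt is already running with -r /var/tmp/spdk_tgt.sock and that the script is run from the SPDK repo root (names, sizes, and ports copied verbatim from the trace; bdev_malloc_create takes total size in MB and block size in bytes):

  #!/usr/bin/env bash
  # Sketch only: rebuilds the target config created by json_config.sh above.
  set -e
  rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # Two malloc bdevs to serve as namespaces (arguments as traced above).
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

  # TCP transport, a subsystem, both namespaces, and a listener on 127.0.0.1:4420.
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420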
00:08:04.997 07:03:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:04.997 07:03:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:04.997 07:03:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:04.997 07:03:16 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:05.568 Calling clear_iscsi_subsystem 00:08:05.568 Calling clear_nvmf_subsystem 00:08:05.568 Calling clear_nbd_subsystem 00:08:05.568 Calling clear_ublk_subsystem 00:08:05.568 Calling clear_vhost_blk_subsystem 00:08:05.568 Calling clear_vhost_scsi_subsystem 00:08:05.568 Calling clear_bdev_subsystem 00:08:05.568 07:03:16 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:05.568 07:03:16 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:05.568 07:03:16 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:05.568 07:03:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:05.568 07:03:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:05.568 07:03:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:05.828 07:03:17 json_config -- json_config/json_config.sh@352 -- # break 00:08:05.828 07:03:17 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:05.828 07:03:17 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:05.828 07:03:17 json_config -- json_config/common.sh@31 -- # local app=target 00:08:05.828 07:03:17 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:05.828 07:03:17 json_config -- json_config/common.sh@35 -- # [[ -n 2156388 ]] 00:08:05.828 07:03:17 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2156388 00:08:05.828 07:03:17 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:05.828 07:03:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:05.828 07:03:17 json_config -- json_config/common.sh@41 -- # kill -0 2156388 00:08:05.828 07:03:17 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:06.399 07:03:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:06.399 07:03:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:06.399 07:03:17 json_config -- json_config/common.sh@41 -- # kill -0 2156388 00:08:06.399 07:03:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:06.399 07:03:17 json_config -- json_config/common.sh@43 -- # break 00:08:06.399 07:03:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:06.399 07:03:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:06.399 SPDK target shutdown done 00:08:06.399 07:03:17 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:06.399 INFO: relaunching applications... 
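Annotation: the shutdown path traced above sends SIGINT and then polls the PID instead of force-killing. A minimal sketch of that pattern; the helper name is illustrative, not SPDK's, but the loop bounds (30 iterations, 0.5 s sleep) match the trace:

  # Sketch of the graceful-shutdown loop from json_config/common.sh above.
  wait_for_shutdown() {                        # illustrative helper
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null || return 0  # already gone
    for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || return 0     # kill -0 = "still alive?"
      sleep 0.5
    done
    return 1                                     # still running after ~15 s
  }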
00:08:06.399 07:03:17 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:06.399 07:03:17 json_config -- json_config/common.sh@9 -- # local app=target 00:08:06.399 07:03:17 json_config -- json_config/common.sh@10 -- # shift 00:08:06.399 07:03:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:06.399 07:03:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:06.399 07:03:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:06.399 07:03:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:06.399 07:03:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:06.399 07:03:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2157526 00:08:06.399 07:03:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:06.399 Waiting for target to run... 00:08:06.399 07:03:17 json_config -- json_config/common.sh@25 -- # waitforlisten 2157526 /var/tmp/spdk_tgt.sock 00:08:06.399 07:03:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:06.399 07:03:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 2157526 ']' 00:08:06.399 07:03:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:06.399 07:03:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.399 07:03:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:06.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:06.400 07:03:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.400 07:03:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:06.400 [2024-11-27 07:03:17.576904] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:06.400 [2024-11-27 07:03:17.576963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157526 ] 00:08:06.971 [2024-11-27 07:03:18.015666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.971 [2024-11-27 07:03:18.048712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.543 [2024-11-27 07:03:18.547582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.543 [2024-11-27 07:03:18.579899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:07.543 07:03:18 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.543 07:03:18 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:07.543 07:03:18 json_config -- json_config/common.sh@26 -- # echo '' 00:08:07.543 00:08:07.543 07:03:18 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:07.543 07:03:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:07.543 INFO: Checking if target configuration is the same... 
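Annotation: waitforlisten above blocks until the relaunched target answers RPCs on the UNIX socket. SPDK's real implementation in autotest_common.sh does more; one plausible minimal version of the idea, using spdk_get_version (a real RPC, visible in the method list later in this log) as the liveness probe:

  # Hypothetical stand-in for waitforlisten: poll until the RPC socket answers.
  wait_for_rpc() {                             # illustrative name
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
      if ./scripts/rpc.py -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; then
        return 0                               # socket is up and answering
      fi
      sleep 0.1
    done
    return 1
  }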
00:08:07.543 07:03:18 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:07.543 07:03:18 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:07.543 07:03:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:07.543 + '[' 2 -ne 2 ']' 00:08:07.543 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:07.543 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:07.543 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:07.543 +++ basename /dev/fd/62 00:08:07.543 ++ mktemp /tmp/62.XXX 00:08:07.543 + tmp_file_1=/tmp/62.dYw 00:08:07.543 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:07.543 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:07.543 + tmp_file_2=/tmp/spdk_tgt_config.json.PWI 00:08:07.543 + ret=0 00:08:07.543 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:07.803 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:07.803 + diff -u /tmp/62.dYw /tmp/spdk_tgt_config.json.PWI 00:08:07.803 + echo 'INFO: JSON config files are the same' 00:08:07.803 INFO: JSON config files are the same 00:08:07.803 + rm /tmp/62.dYw /tmp/spdk_tgt_config.json.PWI 00:08:07.803 + exit 0 00:08:07.803 07:03:19 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:07.803 07:03:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:07.803 INFO: changing configuration and checking if this can be detected... 00:08:07.803 07:03:19 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:07.803 07:03:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:08.065 07:03:19 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:08.065 07:03:19 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:08.065 07:03:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:08.065 + '[' 2 -ne 2 ']' 00:08:08.065 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:08.065 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
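Annotation: what json_diff.sh is doing above: it normalizes the live configuration (save_config streamed through /dev/fd/62) and the on-disk JSON with config_filter.py -method sort, then compares them with diff -u. A condensed sketch of the same comparison, assuming config_filter.py reads JSON on stdin as the trace suggests:

  # Sketch: compare the running target's config against spdk_tgt_config.json.
  rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  filter=./test/json_config/config_filter.py

  live=$(mktemp)
  file=$(mktemp)
  $rpc save_config | $filter -method sort > "$live"
  $filter -method sort < spdk_tgt_config.json > "$file"

  if diff -u "$live" "$file"; then
    echo 'INFO: JSON config files are the same'
  else
    echo 'INFO: configuration change detected.'
  fi
  rm -f "$live" "$file"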
00:08:08.065 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:08.065 +++ basename /dev/fd/62 00:08:08.065 ++ mktemp /tmp/62.XXX 00:08:08.065 + tmp_file_1=/tmp/62.CAR 00:08:08.065 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:08.065 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:08.065 + tmp_file_2=/tmp/spdk_tgt_config.json.8z9 00:08:08.065 + ret=0 00:08:08.065 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:08.326 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:08.588 + diff -u /tmp/62.CAR /tmp/spdk_tgt_config.json.8z9 00:08:08.588 + ret=1 00:08:08.588 + echo '=== Start of file: /tmp/62.CAR ===' 00:08:08.588 + cat /tmp/62.CAR 00:08:08.588 + echo '=== End of file: /tmp/62.CAR ===' 00:08:08.588 + echo '' 00:08:08.588 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8z9 ===' 00:08:08.588 + cat /tmp/spdk_tgt_config.json.8z9 00:08:08.588 + echo '=== End of file: /tmp/spdk_tgt_config.json.8z9 ===' 00:08:08.588 + echo '' 00:08:08.588 + rm /tmp/62.CAR /tmp/spdk_tgt_config.json.8z9 00:08:08.588 + exit 1 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:08.588 INFO: configuration change detected. 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@324 -- # [[ -n 2157526 ]] 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.588 07:03:19 json_config -- json_config/json_config.sh@330 -- # killprocess 2157526 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@954 -- # '[' -z 2157526 ']' 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@958 -- # kill -0 2157526 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@959 -- # uname 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.588 07:03:19 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2157526 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2157526' 00:08:08.588 killing process with pid 2157526 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@973 -- # kill 2157526 00:08:08.588 07:03:19 json_config -- common/autotest_common.sh@978 -- # wait 2157526 00:08:08.850 07:03:19 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:08.850 07:03:19 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:08.850 07:03:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.850 07:03:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.850 07:03:20 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:08.850 07:03:20 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:08.850 INFO: Success 00:08:08.850 00:08:08.850 real 0m7.625s 00:08:08.850 user 0m9.125s 00:08:08.850 sys 0m2.088s 00:08:08.850 07:03:20 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.850 07:03:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.850 ************************************ 00:08:08.850 END TEST json_config 00:08:08.850 ************************************ 00:08:09.113 07:03:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:09.113 07:03:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.113 07:03:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.113 07:03:20 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 ************************************ 00:08:09.113 START TEST json_config_extra_key 00:08:09.113 ************************************ 00:08:09.113 07:03:20 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:09.113 07:03:20 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.113 07:03:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.113 07:03:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.113 07:03:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.113 07:03:20 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.113 07:03:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:09.113 07:03:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.113 07:03:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.113 --rc genhtml_branch_coverage=1 00:08:09.113 --rc genhtml_function_coverage=1 00:08:09.113 --rc genhtml_legend=1 00:08:09.113 --rc geninfo_all_blocks=1 00:08:09.113 --rc geninfo_unexecuted_blocks=1 00:08:09.114 00:08:09.114 ' 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.114 --rc genhtml_branch_coverage=1 00:08:09.114 --rc genhtml_function_coverage=1 00:08:09.114 --rc genhtml_legend=1 00:08:09.114 --rc geninfo_all_blocks=1 00:08:09.114 --rc geninfo_unexecuted_blocks=1 00:08:09.114 00:08:09.114 ' 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.114 --rc genhtml_branch_coverage=1 00:08:09.114 --rc genhtml_function_coverage=1 00:08:09.114 --rc genhtml_legend=1 00:08:09.114 --rc geninfo_all_blocks=1 00:08:09.114 --rc geninfo_unexecuted_blocks=1 00:08:09.114 00:08:09.114 ' 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.114 --rc genhtml_branch_coverage=1 00:08:09.114 --rc genhtml_function_coverage=1 00:08:09.114 --rc genhtml_legend=1 00:08:09.114 --rc geninfo_all_blocks=1 00:08:09.114 --rc geninfo_unexecuted_blocks=1 00:08:09.114 00:08:09.114 ' 00:08:09.114 07:03:20 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.114 07:03:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.114 07:03:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.114 07:03:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.114 07:03:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.114 07:03:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.114 07:03:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.114 07:03:20 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.114 07:03:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:09.114 07:03:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.114 07:03:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:09.114 INFO: launching applications... 
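Annotation: the app_pid / app_socket / app_params / configs_path assignments above are bash associative arrays keyed by app name ('target' here), so one set of helpers can manage several app instances. A self-contained illustration of the pattern; launch() is an illustrative helper, not SPDK code:

  #!/usr/bin/env bash
  # Sketch of the per-app bookkeeping used by json_config/common.sh above.
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A app_pid=()                        # filled in at launch time

  launch() {                                   # illustrative helper
    local app=$1 bin=$2
    $bin ${app_params[$app]} -r "${app_socket[$app]}" &
    app_pid[$app]=$!
    echo "Waiting for $app to run... pid=${app_pid[$app]}"
  }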
00:08:09.114 07:03:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2158309 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:09.114 Waiting for target to run... 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2158309 /var/tmp/spdk_tgt.sock 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2158309 ']' 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:09.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.114 07:03:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 07:03:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:09.376 [2024-11-27 07:03:20.344812] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:09.376 [2024-11-27 07:03:20.344887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158309 ] 00:08:09.637 [2024-11-27 07:03:20.629215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.637 [2024-11-27 07:03:20.656257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.209 07:03:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.209 07:03:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:10.209 07:03:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:10.209 00:08:10.209 07:03:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:10.209 INFO: shutting down applications... 
00:08:10.209 07:03:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:10.209 07:03:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:10.209 07:03:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:10.210 07:03:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2158309 ]] 00:08:10.210 07:03:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2158309 00:08:10.210 07:03:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:10.210 07:03:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:10.210 07:03:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2158309 00:08:10.210 07:03:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:10.471 07:03:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:10.471 07:03:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:10.471 07:03:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2158309 00:08:10.471 07:03:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:10.471 07:03:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:10.471 07:03:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:10.471 07:03:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:10.471 SPDK target shutdown done 00:08:10.471 07:03:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:10.471 Success 00:08:10.471 00:08:10.471 real 0m1.560s 00:08:10.471 user 0m1.179s 00:08:10.471 sys 0m0.398s 00:08:10.471 07:03:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.471 07:03:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:10.471 ************************************ 00:08:10.471 END TEST json_config_extra_key 00:08:10.471 ************************************ 00:08:10.732 07:03:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:10.732 07:03:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.732 07:03:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.732 07:03:21 -- common/autotest_common.sh@10 -- # set +x 00:08:10.732 ************************************ 00:08:10.732 START TEST alias_rpc 00:08:10.732 ************************************ 00:08:10.732 07:03:21 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:10.732 * Looking for test storage... 
00:08:10.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:10.732 07:03:21 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.732 07:03:21 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.732 07:03:21 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.732 07:03:21 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.732 07:03:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:10.732 07:03:21 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.732 07:03:21 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.732 --rc genhtml_branch_coverage=1 00:08:10.732 --rc genhtml_function_coverage=1 00:08:10.732 --rc genhtml_legend=1 00:08:10.733 --rc geninfo_all_blocks=1 00:08:10.733 --rc geninfo_unexecuted_blocks=1 00:08:10.733 00:08:10.733 ' 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.733 --rc genhtml_branch_coverage=1 00:08:10.733 --rc genhtml_function_coverage=1 00:08:10.733 --rc genhtml_legend=1 00:08:10.733 --rc geninfo_all_blocks=1 00:08:10.733 --rc geninfo_unexecuted_blocks=1 00:08:10.733 00:08:10.733 ' 00:08:10.733 07:03:21 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.733 --rc genhtml_branch_coverage=1 00:08:10.733 --rc genhtml_function_coverage=1 00:08:10.733 --rc genhtml_legend=1 00:08:10.733 --rc geninfo_all_blocks=1 00:08:10.733 --rc geninfo_unexecuted_blocks=1 00:08:10.733 00:08:10.733 ' 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.733 --rc genhtml_branch_coverage=1 00:08:10.733 --rc genhtml_function_coverage=1 00:08:10.733 --rc genhtml_legend=1 00:08:10.733 --rc geninfo_all_blocks=1 00:08:10.733 --rc geninfo_unexecuted_blocks=1 00:08:10.733 00:08:10.733 ' 00:08:10.733 07:03:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:10.733 07:03:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2158700 00:08:10.733 07:03:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2158700 00:08:10.733 07:03:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2158700 ']' 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.733 07:03:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.994 [2024-11-27 07:03:21.984079] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:08:10.994 [2024-11-27 07:03:21.984133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158700 ] 00:08:10.994 [2024-11-27 07:03:22.068599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.994 [2024-11-27 07:03:22.100120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.937 07:03:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.937 07:03:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:11.937 07:03:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:11.937 07:03:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2158700 00:08:11.937 07:03:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2158700 ']' 00:08:11.937 07:03:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2158700 00:08:11.937 07:03:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:11.938 07:03:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.938 07:03:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2158700 00:08:11.938 07:03:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.938 07:03:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.938 07:03:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2158700' 00:08:11.938 killing process with pid 2158700 00:08:11.938 07:03:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 2158700 00:08:11.938 07:03:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 2158700 00:08:12.199 00:08:12.199 real 0m1.512s 00:08:12.199 user 0m1.675s 00:08:12.199 sys 0m0.409s 00:08:12.199 07:03:23 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.199 07:03:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.199 ************************************ 00:08:12.199 END TEST alias_rpc 00:08:12.199 ************************************ 00:08:12.199 07:03:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:12.199 07:03:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:12.199 07:03:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.199 07:03:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.199 07:03:23 -- common/autotest_common.sh@10 -- # set +x 00:08:12.199 ************************************ 00:08:12.199 START TEST spdkcli_tcp 00:08:12.199 ************************************ 00:08:12.199 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:12.460 * Looking for test storage... 
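Annotation: killprocess, traced twice in this section, refuses to signal a process unless the PID's command name checks out (reactor_0, and never a sudo wrapper). A hedged sketch of that guard; the function name is illustrative:

  # Sketch of the killprocess guard traced above: verify the PID before kill.
  kill_spdk() {                                # SPDK's version is killprocess
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0     # nothing to do
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1           # never SIGTERM a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null     # wait works for child processes
  }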
00:08:12.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.460 07:03:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.460 --rc genhtml_branch_coverage=1 00:08:12.460 --rc genhtml_function_coverage=1 00:08:12.460 --rc genhtml_legend=1 00:08:12.460 --rc geninfo_all_blocks=1 00:08:12.460 --rc geninfo_unexecuted_blocks=1 00:08:12.460 00:08:12.460 ' 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.460 --rc genhtml_branch_coverage=1 00:08:12.460 --rc genhtml_function_coverage=1 00:08:12.460 --rc genhtml_legend=1 00:08:12.460 --rc geninfo_all_blocks=1 00:08:12.460 --rc 
geninfo_unexecuted_blocks=1 00:08:12.460 00:08:12.460 ' 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.460 --rc genhtml_branch_coverage=1 00:08:12.460 --rc genhtml_function_coverage=1 00:08:12.460 --rc genhtml_legend=1 00:08:12.460 --rc geninfo_all_blocks=1 00:08:12.460 --rc geninfo_unexecuted_blocks=1 00:08:12.460 00:08:12.460 ' 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.460 --rc genhtml_branch_coverage=1 00:08:12.460 --rc genhtml_function_coverage=1 00:08:12.460 --rc genhtml_legend=1 00:08:12.460 --rc geninfo_all_blocks=1 00:08:12.460 --rc geninfo_unexecuted_blocks=1 00:08:12.460 00:08:12.460 ' 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2159101 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2159101 00:08:12.460 07:03:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2159101 ']' 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.460 07:03:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.461 07:03:23 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.461 07:03:23 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.461 07:03:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.461 [2024-11-27 07:03:23.577993] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
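Annotation: the scripts/common.sh trace repeated above (IFS=.-:, read -ra, per-field compares) is a dotted-version comparison used to gate lcov options on lcov >= 2. A compact sketch of the same idea, assuming purely numeric dot/dash/colon-separated fields:

  # Sketch of the cmp_versions logic traced above.
  version_lt() {                 # 0 (true) if $1 < $2, e.g. version_lt 1.15 2
    local -a v1 v2
    local i a b
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      a=${v1[i]:-0}; b=${v2[i]:-0}
      (( a < b )) && return 0
      (( a > b )) && return 1
    done
    return 1                     # equal, so not less-than
  }

  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov'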
00:08:12.461 [2024-11-27 07:03:23.578052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159101 ] 00:08:12.461 [2024-11-27 07:03:23.661923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:12.721 [2024-11-27 07:03:23.693200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.721 [2024-11-27 07:03:23.693209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.293 07:03:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.293 07:03:24 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:13.293 07:03:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2159116 00:08:13.293 07:03:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:13.293 07:03:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:13.554 [ 00:08:13.554 "bdev_malloc_delete", 00:08:13.554 "bdev_malloc_create", 00:08:13.554 "bdev_null_resize", 00:08:13.554 "bdev_null_delete", 00:08:13.554 "bdev_null_create", 00:08:13.554 "bdev_nvme_cuse_unregister", 00:08:13.554 "bdev_nvme_cuse_register", 00:08:13.554 "bdev_opal_new_user", 00:08:13.554 "bdev_opal_set_lock_state", 00:08:13.554 "bdev_opal_delete", 00:08:13.554 "bdev_opal_get_info", 00:08:13.554 "bdev_opal_create", 00:08:13.554 "bdev_nvme_opal_revert", 00:08:13.554 "bdev_nvme_opal_init", 00:08:13.554 "bdev_nvme_send_cmd", 00:08:13.554 "bdev_nvme_set_keys", 00:08:13.554 "bdev_nvme_get_path_iostat", 00:08:13.554 "bdev_nvme_get_mdns_discovery_info", 00:08:13.554 "bdev_nvme_stop_mdns_discovery", 00:08:13.554 "bdev_nvme_start_mdns_discovery", 00:08:13.554 "bdev_nvme_set_multipath_policy", 00:08:13.554 "bdev_nvme_set_preferred_path", 00:08:13.554 "bdev_nvme_get_io_paths", 00:08:13.554 "bdev_nvme_remove_error_injection", 00:08:13.554 "bdev_nvme_add_error_injection", 00:08:13.554 "bdev_nvme_get_discovery_info", 00:08:13.554 "bdev_nvme_stop_discovery", 00:08:13.554 "bdev_nvme_start_discovery", 00:08:13.554 "bdev_nvme_get_controller_health_info", 00:08:13.554 "bdev_nvme_disable_controller", 00:08:13.554 "bdev_nvme_enable_controller", 00:08:13.554 "bdev_nvme_reset_controller", 00:08:13.554 "bdev_nvme_get_transport_statistics", 00:08:13.554 "bdev_nvme_apply_firmware", 00:08:13.554 "bdev_nvme_detach_controller", 00:08:13.554 "bdev_nvme_get_controllers", 00:08:13.554 "bdev_nvme_attach_controller", 00:08:13.554 "bdev_nvme_set_hotplug", 00:08:13.554 "bdev_nvme_set_options", 00:08:13.554 "bdev_passthru_delete", 00:08:13.554 "bdev_passthru_create", 00:08:13.554 "bdev_lvol_set_parent_bdev", 00:08:13.554 "bdev_lvol_set_parent", 00:08:13.554 "bdev_lvol_check_shallow_copy", 00:08:13.554 "bdev_lvol_start_shallow_copy", 00:08:13.554 "bdev_lvol_grow_lvstore", 00:08:13.554 "bdev_lvol_get_lvols", 00:08:13.554 "bdev_lvol_get_lvstores", 00:08:13.555 "bdev_lvol_delete", 00:08:13.555 "bdev_lvol_set_read_only", 00:08:13.555 "bdev_lvol_resize", 00:08:13.555 "bdev_lvol_decouple_parent", 00:08:13.555 "bdev_lvol_inflate", 00:08:13.555 "bdev_lvol_rename", 00:08:13.555 "bdev_lvol_clone_bdev", 00:08:13.555 "bdev_lvol_clone", 00:08:13.555 "bdev_lvol_snapshot", 00:08:13.555 "bdev_lvol_create", 00:08:13.555 "bdev_lvol_delete_lvstore", 00:08:13.555 "bdev_lvol_rename_lvstore", 
00:08:13.555 "bdev_lvol_create_lvstore", 00:08:13.555 "bdev_raid_set_options", 00:08:13.555 "bdev_raid_remove_base_bdev", 00:08:13.555 "bdev_raid_add_base_bdev", 00:08:13.555 "bdev_raid_delete", 00:08:13.555 "bdev_raid_create", 00:08:13.555 "bdev_raid_get_bdevs", 00:08:13.555 "bdev_error_inject_error", 00:08:13.555 "bdev_error_delete", 00:08:13.555 "bdev_error_create", 00:08:13.555 "bdev_split_delete", 00:08:13.555 "bdev_split_create", 00:08:13.555 "bdev_delay_delete", 00:08:13.555 "bdev_delay_create", 00:08:13.555 "bdev_delay_update_latency", 00:08:13.555 "bdev_zone_block_delete", 00:08:13.555 "bdev_zone_block_create", 00:08:13.555 "blobfs_create", 00:08:13.555 "blobfs_detect", 00:08:13.555 "blobfs_set_cache_size", 00:08:13.555 "bdev_aio_delete", 00:08:13.555 "bdev_aio_rescan", 00:08:13.555 "bdev_aio_create", 00:08:13.555 "bdev_ftl_set_property", 00:08:13.555 "bdev_ftl_get_properties", 00:08:13.555 "bdev_ftl_get_stats", 00:08:13.555 "bdev_ftl_unmap", 00:08:13.555 "bdev_ftl_unload", 00:08:13.555 "bdev_ftl_delete", 00:08:13.555 "bdev_ftl_load", 00:08:13.555 "bdev_ftl_create", 00:08:13.555 "bdev_virtio_attach_controller", 00:08:13.555 "bdev_virtio_scsi_get_devices", 00:08:13.555 "bdev_virtio_detach_controller", 00:08:13.555 "bdev_virtio_blk_set_hotplug", 00:08:13.555 "bdev_iscsi_delete", 00:08:13.555 "bdev_iscsi_create", 00:08:13.555 "bdev_iscsi_set_options", 00:08:13.555 "accel_error_inject_error", 00:08:13.555 "ioat_scan_accel_module", 00:08:13.555 "dsa_scan_accel_module", 00:08:13.555 "iaa_scan_accel_module", 00:08:13.555 "vfu_virtio_create_fs_endpoint", 00:08:13.555 "vfu_virtio_create_scsi_endpoint", 00:08:13.555 "vfu_virtio_scsi_remove_target", 00:08:13.555 "vfu_virtio_scsi_add_target", 00:08:13.555 "vfu_virtio_create_blk_endpoint", 00:08:13.555 "vfu_virtio_delete_endpoint", 00:08:13.555 "keyring_file_remove_key", 00:08:13.555 "keyring_file_add_key", 00:08:13.555 "keyring_linux_set_options", 00:08:13.555 "fsdev_aio_delete", 00:08:13.555 "fsdev_aio_create", 00:08:13.555 "iscsi_get_histogram", 00:08:13.555 "iscsi_enable_histogram", 00:08:13.555 "iscsi_set_options", 00:08:13.555 "iscsi_get_auth_groups", 00:08:13.555 "iscsi_auth_group_remove_secret", 00:08:13.555 "iscsi_auth_group_add_secret", 00:08:13.555 "iscsi_delete_auth_group", 00:08:13.555 "iscsi_create_auth_group", 00:08:13.555 "iscsi_set_discovery_auth", 00:08:13.555 "iscsi_get_options", 00:08:13.555 "iscsi_target_node_request_logout", 00:08:13.555 "iscsi_target_node_set_redirect", 00:08:13.555 "iscsi_target_node_set_auth", 00:08:13.555 "iscsi_target_node_add_lun", 00:08:13.555 "iscsi_get_stats", 00:08:13.555 "iscsi_get_connections", 00:08:13.555 "iscsi_portal_group_set_auth", 00:08:13.555 "iscsi_start_portal_group", 00:08:13.555 "iscsi_delete_portal_group", 00:08:13.555 "iscsi_create_portal_group", 00:08:13.555 "iscsi_get_portal_groups", 00:08:13.555 "iscsi_delete_target_node", 00:08:13.555 "iscsi_target_node_remove_pg_ig_maps", 00:08:13.555 "iscsi_target_node_add_pg_ig_maps", 00:08:13.555 "iscsi_create_target_node", 00:08:13.555 "iscsi_get_target_nodes", 00:08:13.555 "iscsi_delete_initiator_group", 00:08:13.555 "iscsi_initiator_group_remove_initiators", 00:08:13.555 "iscsi_initiator_group_add_initiators", 00:08:13.555 "iscsi_create_initiator_group", 00:08:13.555 "iscsi_get_initiator_groups", 00:08:13.555 "nvmf_set_crdt", 00:08:13.555 "nvmf_set_config", 00:08:13.555 "nvmf_set_max_subsystems", 00:08:13.555 "nvmf_stop_mdns_prr", 00:08:13.555 "nvmf_publish_mdns_prr", 00:08:13.555 "nvmf_subsystem_get_listeners", 00:08:13.555 
"nvmf_subsystem_get_qpairs", 00:08:13.555 "nvmf_subsystem_get_controllers", 00:08:13.555 "nvmf_get_stats", 00:08:13.555 "nvmf_get_transports", 00:08:13.555 "nvmf_create_transport", 00:08:13.555 "nvmf_get_targets", 00:08:13.555 "nvmf_delete_target", 00:08:13.555 "nvmf_create_target", 00:08:13.555 "nvmf_subsystem_allow_any_host", 00:08:13.555 "nvmf_subsystem_set_keys", 00:08:13.555 "nvmf_subsystem_remove_host", 00:08:13.555 "nvmf_subsystem_add_host", 00:08:13.555 "nvmf_ns_remove_host", 00:08:13.555 "nvmf_ns_add_host", 00:08:13.555 "nvmf_subsystem_remove_ns", 00:08:13.555 "nvmf_subsystem_set_ns_ana_group", 00:08:13.555 "nvmf_subsystem_add_ns", 00:08:13.555 "nvmf_subsystem_listener_set_ana_state", 00:08:13.555 "nvmf_discovery_get_referrals", 00:08:13.555 "nvmf_discovery_remove_referral", 00:08:13.555 "nvmf_discovery_add_referral", 00:08:13.555 "nvmf_subsystem_remove_listener", 00:08:13.555 "nvmf_subsystem_add_listener", 00:08:13.555 "nvmf_delete_subsystem", 00:08:13.555 "nvmf_create_subsystem", 00:08:13.555 "nvmf_get_subsystems", 00:08:13.555 "env_dpdk_get_mem_stats", 00:08:13.555 "nbd_get_disks", 00:08:13.555 "nbd_stop_disk", 00:08:13.555 "nbd_start_disk", 00:08:13.555 "ublk_recover_disk", 00:08:13.555 "ublk_get_disks", 00:08:13.555 "ublk_stop_disk", 00:08:13.555 "ublk_start_disk", 00:08:13.555 "ublk_destroy_target", 00:08:13.555 "ublk_create_target", 00:08:13.555 "virtio_blk_create_transport", 00:08:13.555 "virtio_blk_get_transports", 00:08:13.555 "vhost_controller_set_coalescing", 00:08:13.555 "vhost_get_controllers", 00:08:13.555 "vhost_delete_controller", 00:08:13.555 "vhost_create_blk_controller", 00:08:13.555 "vhost_scsi_controller_remove_target", 00:08:13.555 "vhost_scsi_controller_add_target", 00:08:13.555 "vhost_start_scsi_controller", 00:08:13.555 "vhost_create_scsi_controller", 00:08:13.555 "thread_set_cpumask", 00:08:13.555 "scheduler_set_options", 00:08:13.555 "framework_get_governor", 00:08:13.555 "framework_get_scheduler", 00:08:13.555 "framework_set_scheduler", 00:08:13.555 "framework_get_reactors", 00:08:13.555 "thread_get_io_channels", 00:08:13.555 "thread_get_pollers", 00:08:13.555 "thread_get_stats", 00:08:13.555 "framework_monitor_context_switch", 00:08:13.555 "spdk_kill_instance", 00:08:13.555 "log_enable_timestamps", 00:08:13.555 "log_get_flags", 00:08:13.555 "log_clear_flag", 00:08:13.555 "log_set_flag", 00:08:13.555 "log_get_level", 00:08:13.555 "log_set_level", 00:08:13.555 "log_get_print_level", 00:08:13.555 "log_set_print_level", 00:08:13.555 "framework_enable_cpumask_locks", 00:08:13.555 "framework_disable_cpumask_locks", 00:08:13.555 "framework_wait_init", 00:08:13.555 "framework_start_init", 00:08:13.555 "scsi_get_devices", 00:08:13.555 "bdev_get_histogram", 00:08:13.555 "bdev_enable_histogram", 00:08:13.555 "bdev_set_qos_limit", 00:08:13.555 "bdev_set_qd_sampling_period", 00:08:13.555 "bdev_get_bdevs", 00:08:13.555 "bdev_reset_iostat", 00:08:13.555 "bdev_get_iostat", 00:08:13.555 "bdev_examine", 00:08:13.555 "bdev_wait_for_examine", 00:08:13.555 "bdev_set_options", 00:08:13.555 "accel_get_stats", 00:08:13.555 "accel_set_options", 00:08:13.555 "accel_set_driver", 00:08:13.555 "accel_crypto_key_destroy", 00:08:13.555 "accel_crypto_keys_get", 00:08:13.555 "accel_crypto_key_create", 00:08:13.555 "accel_assign_opc", 00:08:13.555 "accel_get_module_info", 00:08:13.555 "accel_get_opc_assignments", 00:08:13.555 "vmd_rescan", 00:08:13.555 "vmd_remove_device", 00:08:13.555 "vmd_enable", 00:08:13.555 "sock_get_default_impl", 00:08:13.555 "sock_set_default_impl", 
00:08:13.555 "sock_impl_set_options", 00:08:13.555 "sock_impl_get_options", 00:08:13.555 "iobuf_get_stats", 00:08:13.555 "iobuf_set_options", 00:08:13.555 "keyring_get_keys", 00:08:13.555 "vfu_tgt_set_base_path", 00:08:13.555 "framework_get_pci_devices", 00:08:13.555 "framework_get_config", 00:08:13.555 "framework_get_subsystems", 00:08:13.555 "fsdev_set_opts", 00:08:13.555 "fsdev_get_opts", 00:08:13.555 "trace_get_info", 00:08:13.555 "trace_get_tpoint_group_mask", 00:08:13.555 "trace_disable_tpoint_group", 00:08:13.555 "trace_enable_tpoint_group", 00:08:13.555 "trace_clear_tpoint_mask", 00:08:13.555 "trace_set_tpoint_mask", 00:08:13.555 "notify_get_notifications", 00:08:13.556 "notify_get_types", 00:08:13.556 "spdk_get_version", 00:08:13.556 "rpc_get_methods" 00:08:13.556 ] 00:08:13.556 07:03:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.556 07:03:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:13.556 07:03:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2159101 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2159101 ']' 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2159101 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159101 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159101' 00:08:13.556 killing process with pid 2159101 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2159101 00:08:13.556 07:03:24 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2159101 00:08:13.817 00:08:13.817 real 0m1.507s 00:08:13.817 user 0m2.745s 00:08:13.817 sys 0m0.443s 00:08:13.817 07:03:24 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.817 07:03:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.817 ************************************ 00:08:13.817 END TEST spdkcli_tcp 00:08:13.817 ************************************ 00:08:13.817 07:03:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:13.817 07:03:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.817 07:03:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.817 07:03:24 -- common/autotest_common.sh@10 -- # set +x 00:08:13.817 ************************************ 00:08:13.817 START TEST dpdk_mem_utility 00:08:13.817 ************************************ 00:08:13.818 07:03:24 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:13.818 * Looking for test storage... 
00:08:13.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:13.818 07:03:24 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:13.818 07:03:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:13.818 07:03:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.079 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.079 07:03:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:14.079 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.079 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.079 --rc genhtml_branch_coverage=1 00:08:14.079 --rc genhtml_function_coverage=1 00:08:14.079 --rc genhtml_legend=1 00:08:14.079 --rc geninfo_all_blocks=1 00:08:14.079 --rc geninfo_unexecuted_blocks=1 00:08:14.079 00:08:14.079 ' 00:08:14.079 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.079 --rc 
genhtml_branch_coverage=1 00:08:14.079 --rc genhtml_function_coverage=1 00:08:14.079 --rc genhtml_legend=1 00:08:14.079 --rc geninfo_all_blocks=1 00:08:14.079 --rc geninfo_unexecuted_blocks=1 00:08:14.079 00:08:14.079 ' 00:08:14.079 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.079 --rc genhtml_branch_coverage=1 00:08:14.079 --rc genhtml_function_coverage=1 00:08:14.079 --rc genhtml_legend=1 00:08:14.079 --rc geninfo_all_blocks=1 00:08:14.079 --rc geninfo_unexecuted_blocks=1 00:08:14.079 00:08:14.079 ' 00:08:14.080 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.080 --rc genhtml_branch_coverage=1 00:08:14.080 --rc genhtml_function_coverage=1 00:08:14.080 --rc genhtml_legend=1 00:08:14.080 --rc geninfo_all_blocks=1 00:08:14.080 --rc geninfo_unexecuted_blocks=1 00:08:14.080 00:08:14.080 ' 00:08:14.080 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:14.080 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2159493 00:08:14.080 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2159493 00:08:14.080 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2159493 ']' 00:08:14.080 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:14.080 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.080 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.080 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.080 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.080 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:14.080 [2024-11-27 07:03:25.155337] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
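test_dpdk_mem_info.sh keeps scripts/dpdk_mem_info.py in MEM_SCRIPT and launches a plain spdk_tgt; the heap, mempool and memzone report that follows is produced by asking that target to dump its DPDK memory state and then post-processing the dump file. A rough manual equivalent, assuming a target already listening on the default /var/tmp/spdk.sock:

  # Ask the target to write its DPDK memory stats; the RPC replies with the
  # dump location (/tmp/spdk_mem_dump.txt by default, as echoed below).
  scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from the dump.
  scripts/dpdk_mem_info.py
  # Per-element detail for heap 0, the second pass the test makes.
  scripts/dpdk_mem_info.py -m 0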
00:08:14.080 [2024-11-27 07:03:25.155416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159493 ] 00:08:14.080 [2024-11-27 07:03:25.242004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.080 [2024-11-27 07:03:25.277039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.025 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.025 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:15.025 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:15.025 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:15.025 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.025 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:15.025 { 00:08:15.025 "filename": "/tmp/spdk_mem_dump.txt" 00:08:15.025 } 00:08:15.025 07:03:25 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.025 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:15.025 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:15.025 1 heaps totaling size 818.000000 MiB 00:08:15.025 size: 818.000000 MiB heap id: 0 00:08:15.025 end heaps---------- 00:08:15.025 9 mempools totaling size 603.782043 MiB 00:08:15.025 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:15.025 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:15.025 size: 100.555481 MiB name: bdev_io_2159493 00:08:15.025 size: 50.003479 MiB name: msgpool_2159493 00:08:15.025 size: 36.509338 MiB name: fsdev_io_2159493 00:08:15.025 size: 21.763794 MiB name: PDU_Pool 00:08:15.025 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:15.025 size: 4.133484 MiB name: evtpool_2159493 00:08:15.025 size: 0.026123 MiB name: Session_Pool 00:08:15.025 end mempools------- 00:08:15.025 6 memzones totaling size 4.142822 MiB 00:08:15.025 size: 1.000366 MiB name: RG_ring_0_2159493 00:08:15.025 size: 1.000366 MiB name: RG_ring_1_2159493 00:08:15.025 size: 1.000366 MiB name: RG_ring_4_2159493 00:08:15.025 size: 1.000366 MiB name: RG_ring_5_2159493 00:08:15.025 size: 0.125366 MiB name: RG_ring_2_2159493 00:08:15.025 size: 0.015991 MiB name: RG_ring_3_2159493 00:08:15.025 end memzones------- 00:08:15.025 07:03:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:15.025 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:15.025 list of free elements. 
size: 10.852478 MiB 00:08:15.025 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:15.025 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:15.025 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:15.025 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:15.025 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:15.025 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:15.025 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:15.025 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:15.025 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:08:15.025 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:15.025 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:15.025 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:15.025 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:15.025 element at address: 0x200028200000 with size: 0.410034 MiB 00:08:15.025 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:15.025 list of standard malloc elements. size: 199.218628 MiB 00:08:15.025 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:15.025 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:15.025 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:15.025 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:15.025 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:15.025 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:15.025 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:15.025 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:15.026 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:15.026 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:08:15.026 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:15.026 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200028268f80 with size: 0.000183 MiB 00:08:15.026 element at address: 0x200028269040 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:15.026 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:15.026 list of memzone associated elements. size: 607.928894 MiB 00:08:15.026 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:15.026 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:15.026 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:15.026 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:15.026 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:15.026 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2159493_0 00:08:15.026 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:15.026 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2159493_0 00:08:15.026 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:15.026 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2159493_0 00:08:15.026 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:15.026 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:15.026 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:15.026 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:15.026 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:15.026 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2159493_0 00:08:15.026 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:15.026 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2159493 00:08:15.026 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:15.026 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2159493 00:08:15.026 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:15.026 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:15.026 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:15.026 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:15.026 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:15.026 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:15.026 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:15.026 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:15.026 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:15.026 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2159493 00:08:15.026 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:15.026 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2159493 00:08:15.026 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:15.026 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2159493 00:08:15.026 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:08:15.026 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2159493 00:08:15.026 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:15.026 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2159493 00:08:15.026 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:15.026 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2159493 00:08:15.026 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:15.026 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:15.026 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:15.026 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:15.026 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:15.026 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:15.026 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:15.026 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2159493 00:08:15.026 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:15.026 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2159493 00:08:15.026 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:15.026 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:15.026 element at address: 0x200028269100 with size: 0.023743 MiB 00:08:15.026 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:15.026 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:15.026 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2159493 00:08:15.026 element at address: 0x20002826f240 with size: 0.002441 MiB 00:08:15.026 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:15.026 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:15.026 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2159493 00:08:15.026 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:15.026 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2159493 00:08:15.026 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:15.026 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2159493 00:08:15.026 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:08:15.026 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:15.026 07:03:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:15.026 07:03:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2159493 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2159493 ']' 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2159493 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159493 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159493' 00:08:15.026 killing process with pid 2159493 00:08:15.026 07:03:26 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2159493 00:08:15.026 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2159493 00:08:15.288 00:08:15.288 real 0m1.385s 00:08:15.288 user 0m1.427s 00:08:15.288 sys 0m0.419s 00:08:15.288 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.288 07:03:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:15.288 ************************************ 00:08:15.288 END TEST dpdk_mem_utility 00:08:15.288 ************************************ 00:08:15.288 07:03:26 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:15.288 07:03:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.288 07:03:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.288 07:03:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.288 ************************************ 00:08:15.288 START TEST event 00:08:15.288 ************************************ 00:08:15.288 07:03:26 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:15.288 * Looking for test storage... 00:08:15.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:15.288 07:03:26 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.288 07:03:26 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.288 07:03:26 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.550 07:03:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.550 07:03:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.550 07:03:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.550 07:03:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.550 07:03:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.550 07:03:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.550 07:03:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.550 07:03:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.550 07:03:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.550 07:03:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.550 07:03:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.550 07:03:26 event -- scripts/common.sh@344 -- # case "$op" in 00:08:15.550 07:03:26 event -- scripts/common.sh@345 -- # : 1 00:08:15.550 07:03:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.550 07:03:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.550 07:03:26 event -- scripts/common.sh@365 -- # decimal 1 00:08:15.550 07:03:26 event -- scripts/common.sh@353 -- # local d=1 00:08:15.550 07:03:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.550 07:03:26 event -- scripts/common.sh@355 -- # echo 1 00:08:15.550 07:03:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.550 07:03:26 event -- scripts/common.sh@366 -- # decimal 2 00:08:15.550 07:03:26 event -- scripts/common.sh@353 -- # local d=2 00:08:15.550 07:03:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.550 07:03:26 event -- scripts/common.sh@355 -- # echo 2 00:08:15.550 07:03:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.550 07:03:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.550 07:03:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.550 07:03:26 event -- scripts/common.sh@368 -- # return 0 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.550 --rc genhtml_branch_coverage=1 00:08:15.550 --rc genhtml_function_coverage=1 00:08:15.550 --rc genhtml_legend=1 00:08:15.550 --rc geninfo_all_blocks=1 00:08:15.550 --rc geninfo_unexecuted_blocks=1 00:08:15.550 00:08:15.550 ' 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.550 --rc genhtml_branch_coverage=1 00:08:15.550 --rc genhtml_function_coverage=1 00:08:15.550 --rc genhtml_legend=1 00:08:15.550 --rc geninfo_all_blocks=1 00:08:15.550 --rc geninfo_unexecuted_blocks=1 00:08:15.550 00:08:15.550 ' 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.550 --rc genhtml_branch_coverage=1 00:08:15.550 --rc genhtml_function_coverage=1 00:08:15.550 --rc genhtml_legend=1 00:08:15.550 --rc geninfo_all_blocks=1 00:08:15.550 --rc geninfo_unexecuted_blocks=1 00:08:15.550 00:08:15.550 ' 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.550 --rc genhtml_branch_coverage=1 00:08:15.550 --rc genhtml_function_coverage=1 00:08:15.550 --rc genhtml_legend=1 00:08:15.550 --rc geninfo_all_blocks=1 00:08:15.550 --rc geninfo_unexecuted_blocks=1 00:08:15.550 00:08:15.550 ' 00:08:15.550 07:03:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:15.550 07:03:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:15.550 07:03:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:15.550 07:03:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.550 07:03:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:15.550 ************************************ 00:08:15.550 START TEST event_perf 00:08:15.550 ************************************ 00:08:15.550 07:03:26 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:08:15.550 Running I/O for 1 seconds...[2024-11-27 07:03:26.616119] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:15.550 [2024-11-27 07:03:26.616213] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159784 ] 00:08:15.550 [2024-11-27 07:03:26.707612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.550 [2024-11-27 07:03:26.751153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.550 [2024-11-27 07:03:26.751309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.550 [2024-11-27 07:03:26.751557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.550 [2024-11-27 07:03:26.751557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.936 Running I/O for 1 seconds... 00:08:16.936 lcore 0: 176939 00:08:16.936 lcore 1: 176941 00:08:16.936 lcore 2: 176941 00:08:16.936 lcore 3: 176939 00:08:16.936 done. 00:08:16.936 00:08:16.936 real 0m1.184s 00:08:16.936 user 0m4.092s 00:08:16.936 sys 0m0.090s 00:08:16.936 07:03:27 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.936 07:03:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:16.936 ************************************ 00:08:16.936 END TEST event_perf 00:08:16.936 ************************************ 00:08:16.936 07:03:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:16.936 07:03:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:16.936 07:03:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.936 07:03:27 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.936 ************************************ 00:08:16.936 START TEST event_reactor 00:08:16.936 ************************************ 00:08:16.936 07:03:27 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:16.936 [2024-11-27 07:03:27.877908] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
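event_perf, which just completed above, is a pure event-dispatch benchmark: -m 0xF starts one reactor on each of cores 0-3 and -t 1 runs for one second, after which every lcore prints the number of events it processed (the four counts above differ by only a couple of events, so the load is spread evenly). Run standalone from the same build tree, the invocation is simply:

  # One-second event-dispatch benchmark across cores 0-3 (coremask 0xF).
  test/event/event_perf/event_perf -m 0xF -t 1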
00:08:16.936 [2024-11-27 07:03:27.878014] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159960 ]
00:08:16.936 [2024-11-27 07:03:27.966401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.936 [2024-11-27 07:03:28.005140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.879 test_start
00:08:17.879 oneshot
00:08:17.879 tick 100
00:08:17.879 tick 100
00:08:17.879 tick 250
00:08:17.879 tick 100
00:08:17.879 tick 100
00:08:17.879 tick 250
00:08:17.879 tick 100
00:08:17.879 tick 500
00:08:17.879 tick 100
00:08:17.879 tick 100
00:08:17.879 tick 250
00:08:17.879 tick 100
00:08:17.879 tick 100
00:08:17.879 test_end
00:08:17.879
00:08:17.879 real 0m1.176s
00:08:17.879 user 0m1.089s
00:08:17.879 sys 0m0.082s
00:08:17.879 07:03:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:17.879 07:03:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:08:17.879 ************************************
00:08:17.879 END TEST event_reactor
00:08:17.879 ************************************
00:08:17.879 07:03:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:17.879 07:03:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:17.879 07:03:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:17.879 07:03:29 event -- common/autotest_common.sh@10 -- # set +x
00:08:18.161 ************************************
00:08:18.161 START TEST event_reactor_perf
00:08:18.161 ************************************
00:08:18.162 07:03:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:18.162 [2024-11-27 07:03:29.131788] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:08:18.162 [2024-11-27 07:03:29.131893] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160307 ]
00:08:18.162 [2024-11-27 07:03:29.217661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.162 [2024-11-27 07:03:29.252603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.108 test_start
00:08:19.108 test_end
00:08:19.108 Performance: 537301 events per second
00:08:19.108
00:08:19.108 real 0m1.168s
00:08:19.108 user 0m1.086s
00:08:19.108 sys 0m0.078s
00:08:19.108 07:03:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:19.108 07:03:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:08:19.108 ************************************
00:08:19.108 END TEST event_reactor_perf
00:08:19.108 ************************************
00:08:19.370 07:03:30 event -- event/event.sh@49 -- # uname -s
00:08:19.370 07:03:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:08:19.370 07:03:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:08:19.370 07:03:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:19.370 07:03:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:19.370 07:03:30 event -- common/autotest_common.sh@10 -- # set +x
00:08:19.370 ************************************
00:08:19.370 START TEST event_scheduler
00:08:19.370 ************************************
00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh * Looking for test storage...
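The two reactor tests above follow the same pattern: event_reactor registers a oneshot poller plus timed pollers (the tick 100/250/500 lines, which look like the poller periods in microseconds) and lets one reactor run them for a second, while event_reactor_perf measures raw single-reactor throughput, here 537301 events per second. Both are standalone binaries in the same build tree:

  # Poller smoke test, then the single-core event throughput benchmark.
  test/event/reactor/reactor -t 1
  test/event/reactor_perf/reactor_perf -t 1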
00:08:19.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.370 07:03:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.370 --rc genhtml_branch_coverage=1 00:08:19.370 --rc genhtml_function_coverage=1 00:08:19.370 --rc genhtml_legend=1 00:08:19.370 --rc geninfo_all_blocks=1 00:08:19.370 --rc geninfo_unexecuted_blocks=1 00:08:19.370 00:08:19.370 ' 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.370 --rc genhtml_branch_coverage=1 00:08:19.370 --rc genhtml_function_coverage=1 00:08:19.370 --rc genhtml_legend=1 00:08:19.370 --rc geninfo_all_blocks=1 00:08:19.370 --rc geninfo_unexecuted_blocks=1 00:08:19.370 00:08:19.370 ' 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.370 --rc genhtml_branch_coverage=1 00:08:19.370 --rc genhtml_function_coverage=1 00:08:19.370 --rc genhtml_legend=1 00:08:19.370 --rc geninfo_all_blocks=1 00:08:19.370 --rc geninfo_unexecuted_blocks=1 00:08:19.370 00:08:19.370 ' 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.370 --rc genhtml_branch_coverage=1 00:08:19.370 --rc genhtml_function_coverage=1 00:08:19.370 --rc genhtml_legend=1 00:08:19.370 --rc geninfo_all_blocks=1 00:08:19.370 --rc geninfo_unexecuted_blocks=1 00:08:19.370 00:08:19.370 ' 00:08:19.370 07:03:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:19.370 07:03:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2160689 00:08:19.370 07:03:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:19.370 07:03:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:19.370 07:03:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2160689 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2160689 ']' 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.370 07:03:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:19.630 [2024-11-27 07:03:30.604539] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:19.630 [2024-11-27 07:03:30.604595] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160689 ] 00:08:19.630 [2024-11-27 07:03:30.695122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.630 [2024-11-27 07:03:30.741201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.630 [2024-11-27 07:03:30.741336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.630 [2024-11-27 07:03:30.741288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.630 [2024-11-27 07:03:30.741337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:20.571 07:03:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 [2024-11-27 07:03:31.412061] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:20.571 [2024-11-27 07:03:31.412080] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:20.571 [2024-11-27 07:03:31.412091] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:20.571 [2024-11-27 07:03:31.412097] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:20.571 [2024-11-27 07:03:31.412102] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 [2024-11-27 07:03:31.476279] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
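Because the scheduler app was launched with --wait-for-rpc, scheduler.sh can switch to the dynamic scheduler (framework_set_scheduler dynamic) before calling framework_start_init; the dpdk_governor *ERROR* above, where the 0xF core mask covers only part of a set of SMT siblings, is tolerated and the run simply continues without the governor. The scheduler_create_thread subtest traced below drives the app through a test-local RPC plugin; its calls have this shape, assuming the plugin's directory is on PYTHONPATH so that --plugin scheduler_plugin resolves:

  # Threads pinned to core 0: one always busy (-a 100), one always idle (-a 0).
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # Later steps: make thread 11 busy 50% of the time, then delete thread 12.
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12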
00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 ************************************ 00:08:20.571 START TEST scheduler_create_thread 00:08:20.571 ************************************ 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 2 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 3 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 4 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 5 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 6 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 7 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 8 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.571 9 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.571 07:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:21.142 10 00:08:21.142 07:03:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.142 07:03:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:21.142 07:03:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.142 07:03:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.524 07:03:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.524 07:03:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:22.524 07:03:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:22.524 07:03:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.524 07:03:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.095 07:03:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.095 07:03:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:23.095 07:03:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.095 07:03:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.037 07:03:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.037 07:03:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:24.037 07:03:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:24.037 07:03:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.037 07:03:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.608 07:03:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.608 00:08:24.608 real 0m4.224s 00:08:24.608 user 0m0.028s 00:08:24.608 sys 0m0.004s 00:08:24.608 07:03:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.608 07:03:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.608 ************************************ 00:08:24.608 END TEST scheduler_create_thread 00:08:24.608 ************************************ 00:08:24.608 07:03:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:24.608 07:03:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2160689 00:08:24.608 07:03:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2160689 ']' 00:08:24.608 07:03:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2160689 00:08:24.608 07:03:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:24.608 07:03:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.608 07:03:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160689 00:08:24.869 07:03:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:24.869 07:03:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:24.869 07:03:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160689' 00:08:24.869 killing process with pid 2160689 00:08:24.869 07:03:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2160689 00:08:24.869 07:03:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2160689 00:08:25.130 [2024-11-27 07:03:36.118117] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
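Outside the test plugin, scheduler selection is ordinary JSON-RPC; framework_set_scheduler and framework_get_scheduler both appear in the rpc_get_methods listing earlier in this log. Against any running SPDK app:

  # Switch the app to the dynamic scheduler, then read the setting back.
  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_get_scheduler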
00:08:25.130 00:08:25.130 real 0m5.916s 00:08:25.130 user 0m13.824s 00:08:25.130 sys 0m0.432s 00:08:25.130 07:03:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.130 07:03:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:25.130 ************************************ 00:08:25.130 END TEST event_scheduler 00:08:25.130 ************************************ 00:08:25.130 07:03:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:25.130 07:03:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:25.130 07:03:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.130 07:03:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.130 07:03:36 event -- common/autotest_common.sh@10 -- # set +x 00:08:25.391 ************************************ 00:08:25.391 START TEST app_repeat 00:08:25.391 ************************************ 00:08:25.391 07:03:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2161782 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2161782' 00:08:25.391 Process app_repeat pid: 2161782 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:25.391 spdk_app_start Round 0 00:08:25.391 07:03:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2161782 /var/tmp/spdk-nbd.sock 00:08:25.391 07:03:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2161782 ']' 00:08:25.391 07:03:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:25.391 07:03:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.391 07:03:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:25.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:25.391 07:03:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.391 07:03:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:25.391 [2024-11-27 07:03:36.401023] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:08:25.391 [2024-11-27 07:03:36.401104] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161782 ] 00:08:25.391 [2024-11-27 07:03:36.489633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.391 [2024-11-27 07:03:36.522452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.391 [2024-11-27 07:03:36.522453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.651 07:03:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.651 07:03:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:25.651 07:03:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:25.651 Malloc0 00:08:25.651 07:03:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:25.912 Malloc1 00:08:25.912 07:03:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:25.912 07:03:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.912 07:03:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:25.912 07:03:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:25.912 07:03:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.913 07:03:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:26.173 /dev/nbd0 00:08:26.173 07:03:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:26.173 07:03:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:26.173 1+0 records in 00:08:26.173 1+0 records out 00:08:26.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272916 s, 15.0 MB/s 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:26.173 07:03:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.173 07:03:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:26.173 07:03:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:26.173 /dev/nbd1 00:08:26.173 07:03:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:26.173 07:03:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:26.173 07:03:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:26.434 07:03:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:26.434 1+0 records in 00:08:26.434 1+0 records out 00:08:26.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217522 s, 18.8 MB/s 00:08:26.434 07:03:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:26.434 07:03:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:26.434 07:03:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:26.434 07:03:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:26.434 07:03:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:26.434 
07:03:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:26.434 { 00:08:26.434 "nbd_device": "/dev/nbd0", 00:08:26.434 "bdev_name": "Malloc0" 00:08:26.434 }, 00:08:26.434 { 00:08:26.434 "nbd_device": "/dev/nbd1", 00:08:26.434 "bdev_name": "Malloc1" 00:08:26.434 } 00:08:26.434 ]' 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:26.434 { 00:08:26.434 "nbd_device": "/dev/nbd0", 00:08:26.434 "bdev_name": "Malloc0" 00:08:26.434 }, 00:08:26.434 { 00:08:26.434 "nbd_device": "/dev/nbd1", 00:08:26.434 "bdev_name": "Malloc1" 00:08:26.434 } 00:08:26.434 ]' 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:26.434 /dev/nbd1' 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:26.434 /dev/nbd1' 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:26.434 07:03:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:26.694 256+0 records in 00:08:26.694 256+0 records out 00:08:26.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127485 s, 82.3 MB/s 00:08:26.694 07:03:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:26.694 07:03:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:26.694 256+0 records in 00:08:26.694 256+0 records out 00:08:26.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118743 s, 88.3 MB/s 00:08:26.694 07:03:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:26.694 07:03:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:26.694 256+0 records in 00:08:26.694 256+0 records out 00:08:26.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131359 s, 79.8 MB/s 00:08:26.695 07:03:37 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.695 07:03:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.956 07:03:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.216 07:03:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:27.217 07:03:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:27.217 07:03:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.477 07:03:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:27.477 [2024-11-27 07:03:38.588759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.477 [2024-11-27 07:03:38.617549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.477 [2024-11-27 07:03:38.617550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.477 [2024-11-27 07:03:38.646716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:27.477 [2024-11-27 07:03:38.646746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:30.779 07:03:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:30.779 07:03:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:30.779 spdk_app_start Round 1 00:08:30.779 07:03:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2161782 /var/tmp/spdk-nbd.sock 00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2161782 ']' 00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:30.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
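Round 0 above just completed a full nbd write/verify cycle: nbd_dd_data_verify fills a scratch file from /dev/urandom, writes it through each exported /dev/nbdX with O_DIRECT, then cmp's it back byte-for-byte before the devices are stopped. The same pattern, reduced to a standalone sketch (the scratch path here is illustrative; sizes match the trace):

    #!/usr/bin/env bash
    set -e
    # Write/verify pattern from nbd_dd_data_verify, as exercised in Round 0 above.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest              # illustrative scratch path

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"    # byte-for-byte readback verify
    done
    rm "$tmp_file"

Round 1 below repeats the identical cycle against freshly created Malloc bdevs.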
00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.779 07:03:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:30.779 07:03:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.779 Malloc0 00:08:30.779 07:03:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:31.039 Malloc1 00:08:31.039 07:03:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.039 07:03:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:31.301 /dev/nbd0 00:08:31.301 07:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.301 07:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:31.301 1+0 records in 00:08:31.301 1+0 records out 00:08:31.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342512 s, 12.0 MB/s 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:31.301 07:03:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:31.301 07:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.301 07:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.301 07:03:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:31.301 /dev/nbd1 00:08:31.301 07:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.562 1+0 records in 00:08:31.562 1+0 records out 00:08:31.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306442 s, 13.4 MB/s 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:31.562 07:03:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:31.562 { 00:08:31.562 "nbd_device": "/dev/nbd0", 00:08:31.562 "bdev_name": "Malloc0" 00:08:31.562 }, 00:08:31.562 { 00:08:31.562 "nbd_device": "/dev/nbd1", 00:08:31.562 "bdev_name": "Malloc1" 00:08:31.562 } 00:08:31.562 ]' 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.562 { 00:08:31.562 "nbd_device": "/dev/nbd0", 00:08:31.562 "bdev_name": "Malloc0" 00:08:31.562 }, 00:08:31.562 { 00:08:31.562 "nbd_device": "/dev/nbd1", 00:08:31.562 "bdev_name": "Malloc1" 00:08:31.562 } 00:08:31.562 ]' 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.562 /dev/nbd1' 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.562 /dev/nbd1' 00:08:31.562 07:03:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:31.824 256+0 records in 00:08:31.824 256+0 records out 00:08:31.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127745 s, 82.1 MB/s 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.824 256+0 records in 00:08:31.824 256+0 records out 00:08:31.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121912 s, 86.0 MB/s 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.824 256+0 records in 00:08:31.824 256+0 records out 00:08:31.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129476 s, 81.0 MB/s 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.824 07:03:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.085 07:03:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:32.346 07:03:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:32.346 07:03:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:32.606 07:03:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:32.606 [2024-11-27 07:03:43.739920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:32.607 [2024-11-27 07:03:43.767816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.607 [2024-11-27 07:03:43.767816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.607 [2024-11-27 07:03:43.797472] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:32.607 [2024-11-27 07:03:43.797503] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:35.903 07:03:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:35.903 07:03:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:35.903 spdk_app_start Round 2 00:08:35.903 07:03:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2161782 /var/tmp/spdk-nbd.sock 00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2161782 ']' 00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:35.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
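Between rounds, nbd_get_count confirms the teardown stuck: nbd_get_disks must return an empty JSON array once both devices are stopped. The counting trick is plain JSON plumbing, sketched here (socket and script paths as in the trace):

    # Sketch of nbd_get_count: how many /dev/nbd* does the app still export?
    json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits 1 on zero matches
    [ "$count" -eq 0 ] || echo "stale nbd devices: $names"

A non-zero count here would trip the '0 -ne 0' guard visible in the trace.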
00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.903 07:03:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:35.904 07:03:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:35.904 Malloc0 00:08:35.904 07:03:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:36.163 Malloc1 00:08:36.163 07:03:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:36.163 07:03:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:36.422 /dev/nbd0 00:08:36.422 07:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:36.422 07:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:36.422 1+0 records in 00:08:36.422 1+0 records out 00:08:36.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269727 s, 15.2 MB/s 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:36.422 07:03:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:36.422 07:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:36.422 07:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:36.422 07:03:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:36.682 /dev/nbd1 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:36.682 1+0 records in 00:08:36.682 1+0 records out 00:08:36.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274904 s, 14.9 MB/s 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:36.682 07:03:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:36.682 { 00:08:36.682 "nbd_device": "/dev/nbd0", 00:08:36.682 "bdev_name": "Malloc0" 00:08:36.682 }, 00:08:36.682 { 00:08:36.682 "nbd_device": "/dev/nbd1", 00:08:36.682 "bdev_name": "Malloc1" 00:08:36.682 } 00:08:36.682 ]' 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:36.682 { 00:08:36.682 "nbd_device": "/dev/nbd0", 00:08:36.682 "bdev_name": "Malloc0" 00:08:36.682 }, 00:08:36.682 { 00:08:36.682 "nbd_device": "/dev/nbd1", 00:08:36.682 "bdev_name": "Malloc1" 00:08:36.682 } 00:08:36.682 ]' 00:08:36.682 07:03:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:36.942 /dev/nbd1' 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:36.942 /dev/nbd1' 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:36.942 256+0 records in 00:08:36.942 256+0 records out 00:08:36.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127315 s, 82.4 MB/s 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:36.942 256+0 records in 00:08:36.942 256+0 records out 00:08:36.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123736 s, 84.7 MB/s 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:36.942 256+0 records in 00:08:36.942 256+0 records out 00:08:36.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129798 s, 80.8 MB/s 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:36.942 07:03:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.943 07:03:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.208 07:03:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:37.558 07:03:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:37.558 07:03:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:37.863 07:03:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:37.863 [2024-11-27 07:03:48.898042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:37.863 [2024-11-27 07:03:48.926283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.863 [2024-11-27 07:03:48.926370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.863 [2024-11-27 07:03:48.955415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:37.863 [2024-11-27 07:03:48.955446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:41.224 07:03:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2161782 /var/tmp/spdk-nbd.sock 00:08:41.224 07:03:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2161782 ']' 00:08:41.225 07:03:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:41.225 07:03:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.225 07:03:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:41.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:41.225 07:03:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.225 07:03:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:41.225 07:03:52 event.app_repeat -- event/event.sh@39 -- # killprocess 2161782 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2161782 ']' 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2161782 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2161782 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2161782' 00:08:41.225 killing process with pid 2161782 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2161782 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2161782 00:08:41.225 spdk_app_start is called in Round 0. 00:08:41.225 Shutdown signal received, stop current app iteration 00:08:41.225 Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 reinitialization... 00:08:41.225 spdk_app_start is called in Round 1. 00:08:41.225 Shutdown signal received, stop current app iteration 00:08:41.225 Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 reinitialization... 00:08:41.225 spdk_app_start is called in Round 2. 00:08:41.225 Shutdown signal received, stop current app iteration 00:08:41.225 Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 reinitialization... 00:08:41.225 spdk_app_start is called in Round 3. 
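The final teardown above goes through killprocess, which refuses to signal anything it cannot identify: it probes the pid with kill -0, reads the command name back with ps, detours around a bare sudo wrapper, and only then kills and reaps the app. A simplified sketch of that helper, reconstructed from the trace (the real one in autotest_common.sh also handles non-Linux hosts):

    # Simplified sketch of the killprocess helper seen above.
    killprocess() {
        local pid=$1 name target
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone
        name=$(ps --no-headers -o comm= "$pid")           # Linux branch, as traced
        target=$pid
        [ "$name" = sudo ] && target=$(pgrep -P "$pid")   # signal the app, not sudo itself
        echo "killing process with pid $target"
        kill "$target"
        wait "$pid" 2>/dev/null || true                   # reap when it is our own child
    }

The Round 0-3 summary continuing below is the app echoing each restart it observed before the final shutdown.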
00:08:41.225 Shutdown signal received, stop current app iteration 00:08:41.225 07:03:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:41.225 07:03:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:41.225 00:08:41.225 real 0m15.798s 00:08:41.225 user 0m34.676s 00:08:41.225 sys 0m2.285s 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.225 07:03:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:41.225 ************************************ 00:08:41.225 END TEST app_repeat 00:08:41.225 ************************************ 00:08:41.225 07:03:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:41.225 07:03:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:41.225 07:03:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.225 07:03:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.225 07:03:52 event -- common/autotest_common.sh@10 -- # set +x 00:08:41.225 ************************************ 00:08:41.225 START TEST cpu_locks 00:08:41.225 ************************************ 00:08:41.225 07:03:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:41.225 * Looking for test storage... 00:08:41.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:41.225 07:03:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.225 07:03:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.225 07:03:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.225 07:03:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.225 07:03:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:41.487 07:03:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.487 07:03:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.487 07:03:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.487 07:03:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.487 --rc genhtml_branch_coverage=1 00:08:41.487 --rc genhtml_function_coverage=1 00:08:41.487 --rc genhtml_legend=1 00:08:41.487 --rc geninfo_all_blocks=1 00:08:41.487 --rc geninfo_unexecuted_blocks=1 00:08:41.487 00:08:41.487 ' 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.487 --rc genhtml_branch_coverage=1 00:08:41.487 --rc genhtml_function_coverage=1 00:08:41.487 --rc genhtml_legend=1 00:08:41.487 --rc geninfo_all_blocks=1 00:08:41.487 --rc geninfo_unexecuted_blocks=1 00:08:41.487 00:08:41.487 ' 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.487 --rc genhtml_branch_coverage=1 00:08:41.487 --rc genhtml_function_coverage=1 00:08:41.487 --rc genhtml_legend=1 00:08:41.487 --rc geninfo_all_blocks=1 00:08:41.487 --rc geninfo_unexecuted_blocks=1 00:08:41.487 00:08:41.487 ' 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.487 --rc genhtml_branch_coverage=1 00:08:41.487 --rc genhtml_function_coverage=1 00:08:41.487 --rc genhtml_legend=1 00:08:41.487 --rc geninfo_all_blocks=1 00:08:41.487 --rc geninfo_unexecuted_blocks=1 00:08:41.487 00:08:41.487 ' 00:08:41.487 07:03:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:41.487 07:03:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:41.487 07:03:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:41.487 07:03:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.487 07:03:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:41.487 ************************************ 
00:08:41.487 START TEST default_locks 00:08:41.487 ************************************ 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2165360 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2165360 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2165360 ']' 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.487 07:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:41.487 [2024-11-27 07:03:52.536910] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:41.487 [2024-11-27 07:03:52.536958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165360 ] 00:08:41.487 [2024-11-27 07:03:52.621454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.487 [2024-11-27 07:03:52.653911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.430 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.430 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:42.430 07:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2165360 00:08:42.430 07:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2165360 00:08:42.430 07:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:42.691 lslocks: write error 00:08:42.691 07:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2165360 00:08:42.691 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2165360 ']' 00:08:42.691 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2165360 00:08:42.691 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:42.692 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.692 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2165360 00:08:42.692 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.692 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.692 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2165360' 00:08:42.692 killing process with pid 2165360 00:08:42.692 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2165360 00:08:42.692 07:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2165360 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2165360 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2165360 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2165360 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2165360 ']' 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2165360) - No such process 00:08:42.953 ERROR: process (pid: 2165360) is no longer running 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:42.953 00:08:42.953 real 0m1.608s 00:08:42.953 user 0m1.717s 00:08:42.953 sys 0m0.555s 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.953 07:03:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.953 ************************************ 00:08:42.953 END TEST default_locks 00:08:42.954 ************************************ 00:08:42.954 07:03:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:42.954 07:03:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.954 07:03:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.954 07:03:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.954 ************************************ 00:08:42.954 START TEST default_locks_via_rpc 00:08:42.954 ************************************ 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2165724 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2165724 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2165724 ']' 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
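The default_locks flow that ended just above reads as a recipe: start spdk_tgt pinned to core 0, confirm with lslocks that the process holds one of the /var/tmp/spdk_cpu_lock_* files (the stray "lslocks: write error" is almost certainly lslocks hitting the pipe closed early by grep -q, not a test failure), kill the target, then use the NOT wrapper to assert that waiting on the dead pid now fails with "No such process". A condensed sketch of the lock check, assuming an SPDK build tree at $SPDK_DIR; waitforlisten and NOT live in autotest_common.sh and are replaced by a sleep and plain shell here:

#!/usr/bin/env bash
# Sketch of the default_locks check; SPDK_DIR is an assumption.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &   # claims core 0 by default
pid=$!
sleep 2                                   # stand-in for waitforlisten

# A running target holds an advisory lock on a spdk_cpu_lock file.
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

kill -9 "$pid"
wait "$pid" 2>/dev/null

# After the kill the pid must be gone, which is what NOT waitforlisten
# asserts above ('No such process' is the expected outcome).
kill -0 "$pid" 2>/dev/null || echo "process gone, as expected"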
00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.954 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.216 [2024-11-27 07:03:54.210553] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:43.216 [2024-11-27 07:03:54.210608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165724 ] 00:08:43.216 [2024-11-27 07:03:54.293193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.216 [2024-11-27 07:03:54.324790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.157 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.157 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:44.157 07:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:44.157 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.157 07:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2165724 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2165724 00:08:44.157 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:44.417 07:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2165724 00:08:44.417 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2165724 ']' 00:08:44.417 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2165724 00:08:44.417 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:44.417 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.418 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2165724 00:08:44.418 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.418 
07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.418 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2165724' 00:08:44.418 killing process with pid 2165724 00:08:44.418 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2165724 00:08:44.418 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2165724 00:08:44.679 00:08:44.679 real 0m1.638s 00:08:44.679 user 0m1.758s 00:08:44.679 sys 0m0.571s 00:08:44.679 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.679 07:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.679 ************************************ 00:08:44.679 END TEST default_locks_via_rpc 00:08:44.679 ************************************ 00:08:44.679 07:03:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:44.679 07:03:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.679 07:03:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.679 07:03:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.679 ************************************ 00:08:44.679 START TEST non_locking_app_on_locked_coremask 00:08:44.679 ************************************ 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2166094 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2166094 /var/tmp/spdk.sock 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2166094 ']' 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.679 07:03:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.938 [2024-11-27 07:03:55.924975] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
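default_locks_via_rpc, which wrapped up just above, drives the same lock through RPC instead of process lifetime: the target starts normally, framework_disable_cpumask_locks releases the core lock (after which the no_locks helper finds no /var/tmp/spdk_cpu_lock_* files), and framework_enable_cpumask_locks takes it back, re-verified with the usual lslocks grep. A sketch of just the RPC half against a target assumed to be listening on /var/tmp/spdk.sock:

#!/usr/bin/env bash
# Toggle CPU-mask locks on a live target; socket path as in this log.
rpc_py=./scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock

"$rpc_py" -s "$rpc_sock" framework_disable_cpumask_locks
# With locks released there should be nothing left to find.
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "unexpected lock files remain"

"$rpc_py" -s "$rpc_sock" framework_enable_cpumask_locks
# The lock for core 0 should now be held again.
lslocks | grep -q spdk_cpu_lock && echo "core lock re-acquired"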
00:08:44.938 [2024-11-27 07:03:55.925031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166094 ] 00:08:44.938 [2024-11-27 07:03:56.007955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.938 [2024-11-27 07:03:56.039154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2166174 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2166174 /var/tmp/spdk2.sock 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2166174 ']' 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:45.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.509 07:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 [2024-11-27 07:03:56.742786] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:45.770 [2024-11-27 07:03:56.742840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166174 ] 00:08:45.770 [2024-11-27 07:03:56.830775] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:45.770 [2024-11-27 07:03:56.830800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.770 [2024-11-27 07:03:56.893182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.341 07:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.342 07:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:46.342 07:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2166094 00:08:46.604 07:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2166094 00:08:46.604 07:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:47.177 lslocks: write error 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2166094 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2166094 ']' 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2166094 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166094 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166094' 00:08:47.177 killing process with pid 2166094 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2166094 00:08:47.177 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2166094 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2166174 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2166174 ']' 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2166174 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166174 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166174' 00:08:47.439 
killing process with pid 2166174 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2166174 00:08:47.439 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2166174 00:08:47.700 00:08:47.700 real 0m2.931s 00:08:47.700 user 0m3.276s 00:08:47.700 sys 0m0.875s 00:08:47.700 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.700 07:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:47.700 ************************************ 00:08:47.700 END TEST non_locking_app_on_locked_coremask 00:08:47.700 ************************************ 00:08:47.700 07:03:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:47.700 07:03:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.700 07:03:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.700 07:03:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:47.700 ************************************ 00:08:47.700 START TEST locking_app_on_unlocked_coremask 00:08:47.700 ************************************ 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2166797 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2166797 /var/tmp/spdk.sock 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2166797 ']' 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.700 07:03:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:47.962 [2024-11-27 07:03:58.936262] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:47.962 [2024-11-27 07:03:58.936322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166797 ] 00:08:47.962 [2024-11-27 07:03:59.020908] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
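non_locking_app_on_locked_coremask, which finished above, is the coexistence case: the first spdk_tgt claims core 0, and a second instance on the same mask still comes up because it was started with --disable-cpumask-locks and its own RPC socket; its "CPU core locks deactivated" startup notice is the tell. A sketch of that pairing, with SPDK_DIR, the sleeps, and the socket path as assumptions:

#!/usr/bin/env bash
# Two targets on one core: only the first takes the core lock.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &
locked=$!
sleep 2
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &
unlocked=$!
sleep 2
# Only the first pid should show a spdk_cpu_lock entry.
lslocks -p "$locked"   | grep spdk_cpu_lock
lslocks -p "$unlocked" | grep spdk_cpu_lock || echo "second instance holds no lock"
kill "$locked" "$unlocked"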
00:08:47.962 [2024-11-27 07:03:59.020935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.962 [2024-11-27 07:03:59.054887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2166816 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2166816 /var/tmp/spdk2.sock 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2166816 ']' 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:48.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.536 07:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:48.797 [2024-11-27 07:03:59.774914] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:08:48.797 [2024-11-27 07:03:59.774965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166816 ] 00:08:48.797 [2024-11-27 07:03:59.860892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.797 [2024-11-27 07:03:59.923207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.370 07:04:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.370 07:04:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:49.370 07:04:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2166816 00:08:49.370 07:04:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2166816 00:08:49.370 07:04:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:49.942 lslocks: write error 00:08:49.942 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2166797 00:08:49.942 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2166797 ']' 00:08:49.942 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2166797 00:08:49.942 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:49.942 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.942 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166797 00:08:50.203 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.203 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.203 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166797' 00:08:50.203 killing process with pid 2166797 00:08:50.203 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2166797 00:08:50.203 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2166797 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2166816 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2166816 ']' 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2166816 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166816 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.463 07:04:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166816' 00:08:50.463 killing process with pid 2166816 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2166816 00:08:50.463 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2166816 00:08:50.724 00:08:50.724 real 0m2.936s 00:08:50.724 user 0m3.277s 00:08:50.724 sys 0m0.911s 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.724 ************************************ 00:08:50.724 END TEST locking_app_on_unlocked_coremask 00:08:50.724 ************************************ 00:08:50.724 07:04:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:50.724 07:04:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.724 07:04:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.724 07:04:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:50.724 ************************************ 00:08:50.724 START TEST locking_app_on_locked_coremask 00:08:50.724 ************************************ 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2167328 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2167328 /var/tmp/spdk.sock 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2167328 ']' 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.724 07:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.985 [2024-11-27 07:04:01.959132] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
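locking_app_on_unlocked_coremask, closed out above, is the mirror image: the first instance opts out with --disable-cpumask-locks, leaving core 0 unclaimed, so a second, normally-locking instance on the same mask acquires the lock without complaint. Swapping which process gets the flag in the previous sketch reproduces it:

#!/usr/bin/env bash
# Mirror case: the first instance opts out, the second takes the lock.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
first=$!
sleep 2
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
second=$!
sleep 2
lslocks -p "$second" | grep -q spdk_cpu_lock && echo "lock held by second instance"
kill "$first" "$second"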
00:08:50.985 [2024-11-27 07:04:01.959200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167328 ] 00:08:50.985 [2024-11-27 07:04:02.046483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.985 [2024-11-27 07:04:02.086362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.588 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.588 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:51.588 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2167526 00:08:51.588 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:51.588 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2167526 /var/tmp/spdk2.sock 00:08:51.588 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:51.588 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2167526 /var/tmp/spdk2.sock 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2167526 /var/tmp/spdk2.sock 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2167526 ']' 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:51.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.589 07:04:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.849 [2024-11-27 07:04:02.804204] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:08:51.849 [2024-11-27 07:04:02.804257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167526 ] 00:08:51.849 [2024-11-27 07:04:02.893255] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2167328 has claimed it. 00:08:51.849 [2024-11-27 07:04:02.893293] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:52.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2167526) - No such process 00:08:52.421 ERROR: process (pid: 2167526) is no longer running 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2167328 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2167328 00:08:52.421 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:52.992 lslocks: write error 00:08:52.992 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2167328 00:08:52.992 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2167328 ']' 00:08:52.992 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2167328 00:08:52.992 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:52.992 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.992 07:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2167328 00:08:52.992 07:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.992 07:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.993 07:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2167328' 00:08:52.993 killing process with pid 2167328 00:08:52.993 07:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2167328 00:08:52.993 07:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2167328 00:08:53.253 00:08:53.253 real 0m2.345s 00:08:53.253 user 0m2.642s 00:08:53.253 sys 0m0.654s 00:08:53.253 07:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:08:53.253 07:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.253 ************************************ 00:08:53.253 END TEST locking_app_on_locked_coremask 00:08:53.253 ************************************ 00:08:53.253 07:04:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:53.253 07:04:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.253 07:04:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.253 07:04:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:53.253 ************************************ 00:08:53.253 START TEST locking_overlapped_coremask 00:08:53.253 ************************************ 00:08:53.253 07:04:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:53.253 07:04:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2167887 00:08:53.253 07:04:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2167887 /var/tmp/spdk.sock 00:08:53.253 07:04:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:53.253 07:04:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2167887 ']' 00:08:53.254 07:04:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.254 07:04:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.254 07:04:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.254 07:04:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.254 07:04:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.254 [2024-11-27 07:04:04.374434] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
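The locking_app_on_locked_coremask run that ended just above demonstrates the conflict path: with core 0 already locked by pid 2167328, the second locking instance aborts during startup with "Cannot create lock on core 0, probably process 2167328 has claimed it", and NOT waitforlisten converts that expected failure into a pass. A sketch of asserting the same failure directly, assuming the refused instance exits non-zero; timeout guards the unexpected case where it would start and block:

#!/usr/bin/env bash
# Expect the second locking instance to refuse to start.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &
holder=$!
sleep 2
timeout 10 "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
rc=$?
# rc 124 would mean the target actually ran until timeout (a bug here);
# any other non-zero rc is the expected refusal to claim core 0.
if [ "$rc" -ne 0 ] && [ "$rc" -ne 124 ]; then
    echo "second instance could not claim core 0, as expected"
else
    echo "BUG: second instance ran despite the held core lock" >&2
fi
kill "$holder"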
00:08:53.254 [2024-11-27 07:04:04.374485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167887 ] 00:08:53.514 [2024-11-27 07:04:04.457525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.514 [2024-11-27 07:04:04.489015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.514 [2024-11-27 07:04:04.489170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.514 [2024-11-27 07:04:04.489179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2167968 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2167968 /var/tmp/spdk2.sock 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2167968 /var/tmp/spdk2.sock 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2167968 /var/tmp/spdk2.sock 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2167968 ']' 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:54.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.086 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:54.086 [2024-11-27 07:04:05.223722] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:08:54.086 [2024-11-27 07:04:05.223777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167968 ] 00:08:54.347 [2024-11-27 07:04:05.336686] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2167887 has claimed it. 00:08:54.347 [2024-11-27 07:04:05.336731] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:54.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2167968) - No such process 00:08:54.917 ERROR: process (pid: 2167968) is no longer running 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2167887 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2167887 ']' 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2167887 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2167887 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2167887' 00:08:54.918 killing process with pid 2167887 00:08:54.918 07:04:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2167887 00:08:54.918 07:04:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2167887 00:08:54.918 00:08:54.918 real 0m1.775s 00:08:54.918 user 0m5.148s 00:08:54.918 sys 0m0.385s 00:08:54.918 07:04:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.918 07:04:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:54.918 ************************************ 00:08:54.918 END TEST locking_overlapped_coremask 00:08:54.918 ************************************ 00:08:54.918 07:04:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:55.178 07:04:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.178 07:04:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.178 07:04:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:55.178 ************************************ 00:08:55.178 START TEST locking_overlapped_coremask_via_rpc 00:08:55.178 ************************************ 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2168261 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2168261 /var/tmp/spdk.sock 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2168261 ']' 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.178 07:04:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.178 [2024-11-27 07:04:06.212254] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:55.178 [2024-11-27 07:04:06.212301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168261 ] 00:08:55.179 [2024-11-27 07:04:06.297440] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
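The mechanism the test above checks is visible in check_remaining_locks: one lock file per claimed core, /var/tmp/spdk_cpu_lock_000 through _002 for mask 0x7. As an aside, the "second claimant fails" behavior can be mimicked with plain flock(1); this is an illustration only, not SPDK's internal locking code:

    # First process claims "core 2" by holding a lock on its lock file.
    ( flock -n 9 || exit 1; sleep 30 ) 9>/var/tmp/spdk_cpu_lock_002 &
    sleep 1
    # An overlapping claimant cannot take the same lock and is rejected.
    flock -n /var/tmp/spdk_cpu_lock_002 -c true ||
        echo 'Cannot create lock on core 2'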
00:08:55.179 [2024-11-27 07:04:06.297471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:55.179 [2024-11-27 07:04:06.329290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.179 [2024-11-27 07:04:06.329555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.179 [2024-11-27 07:04:06.329556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2168430 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2168430 /var/tmp/spdk2.sock 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2168430 ']' 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.120 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.120 [2024-11-27 07:04:07.063367] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:56.120 [2024-11-27 07:04:07.063421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168430 ] 00:08:56.120 [2024-11-27 07:04:07.174790] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:56.120 [2024-11-27 07:04:07.174826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.120 [2024-11-27 07:04:07.252665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.120 [2024-11-27 07:04:07.252823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.120 [2024-11-27 07:04:07.252824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.691 [2024-11-27 07:04:07.881237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2168261 has claimed it. 
00:08:56.691 request: 00:08:56.691 { 00:08:56.691 "method": "framework_enable_cpumask_locks", 00:08:56.691 "req_id": 1 00:08:56.691 } 00:08:56.691 Got JSON-RPC error response 00:08:56.691 response: 00:08:56.691 { 00:08:56.691 "code": -32603, 00:08:56.691 "message": "Failed to claim CPU core: 2" 00:08:56.691 } 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2168261 /var/tmp/spdk.sock 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2168261 ']' 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.691 07:04:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2168430 /var/tmp/spdk2.sock 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2168430 ']' 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
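Condensed, the sequence this via_rpc variant drives is: both targets boot with --disable-cpumask-locks, the first then claims its cores over JSON-RPC, and the second's claim collides on core 2 (masks 0x7 and 0x1c overlap only there). A minimal sketch using only the commands seen in the trace above (run from the spdk checkout; backgrounding and waits omitted):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no locks yet
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, no locks yet
    scripts/rpc.py framework_enable_cpumask_locks                                # first target claims 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails: -32603, core 2 taken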
00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.952 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:57.221 00:08:57.221 real 0m2.099s 00:08:57.221 user 0m0.882s 00:08:57.221 sys 0m0.143s 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.221 07:04:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.221 ************************************ 00:08:57.221 END TEST locking_overlapped_coremask_via_rpc 00:08:57.221 ************************************ 00:08:57.221 07:04:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:57.221 07:04:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2168261 ]] 00:08:57.221 07:04:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2168261 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2168261 ']' 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2168261 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2168261 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2168261' 00:08:57.221 killing process with pid 2168261 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2168261 00:08:57.221 07:04:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2168261 00:08:57.483 07:04:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2168430 ]] 00:08:57.483 07:04:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2168430 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2168430 ']' 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2168430 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2168430 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2168430' 00:08:57.483 killing process with pid 2168430 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2168430 00:08:57.483 07:04:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2168430 00:08:57.744 07:04:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.744 07:04:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:57.744 07:04:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2168261 ]] 00:08:57.744 07:04:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2168261 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2168261 ']' 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2168261 00:08:57.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2168261) - No such process 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2168261 is not found' 00:08:57.744 Process with pid 2168261 is not found 00:08:57.744 07:04:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2168430 ]] 00:08:57.744 07:04:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2168430 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2168430 ']' 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2168430 00:08:57.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2168430) - No such process 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2168430 is not found' 00:08:57.744 Process with pid 2168430 is not found 00:08:57.744 07:04:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.744 00:08:57.744 real 0m16.575s 00:08:57.744 user 0m28.761s 00:08:57.744 sys 0m5.058s 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.744 07:04:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.744 ************************************ 00:08:57.744 END TEST cpu_locks 00:08:57.744 ************************************ 00:08:57.744 00:08:57.744 real 0m42.501s 00:08:57.744 user 1m23.830s 00:08:57.744 sys 0m8.443s 00:08:57.744 07:04:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.744 07:04:08 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.744 ************************************ 00:08:57.744 END TEST event 00:08:57.744 ************************************ 00:08:57.744 07:04:08 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:57.744 07:04:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.744 07:04:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.744 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:08:57.744 ************************************ 00:08:57.744 START TEST thread 00:08:57.744 ************************************ 00:08:57.744 07:04:08 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:58.007 * Looking for test storage... 00:08:58.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.007 07:04:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.007 07:04:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.007 07:04:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.007 07:04:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.007 07:04:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.007 07:04:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.007 07:04:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.007 07:04:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.007 07:04:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.007 07:04:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.007 07:04:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.007 07:04:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:58.007 07:04:09 thread -- scripts/common.sh@345 -- # : 1 00:08:58.007 07:04:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.007 07:04:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.007 07:04:09 thread -- scripts/common.sh@365 -- # decimal 1 00:08:58.007 07:04:09 thread -- scripts/common.sh@353 -- # local d=1 00:08:58.007 07:04:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.007 07:04:09 thread -- scripts/common.sh@355 -- # echo 1 00:08:58.007 07:04:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.007 07:04:09 thread -- scripts/common.sh@366 -- # decimal 2 00:08:58.007 07:04:09 thread -- scripts/common.sh@353 -- # local d=2 00:08:58.007 07:04:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.007 07:04:09 thread -- scripts/common.sh@355 -- # echo 2 00:08:58.007 07:04:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.007 07:04:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.007 07:04:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.007 07:04:09 thread -- scripts/common.sh@368 -- # return 0 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.007 --rc genhtml_branch_coverage=1 00:08:58.007 --rc genhtml_function_coverage=1 00:08:58.007 --rc genhtml_legend=1 00:08:58.007 --rc geninfo_all_blocks=1 00:08:58.007 --rc geninfo_unexecuted_blocks=1 00:08:58.007 00:08:58.007 ' 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.007 --rc genhtml_branch_coverage=1 00:08:58.007 --rc genhtml_function_coverage=1 00:08:58.007 --rc genhtml_legend=1 00:08:58.007 --rc geninfo_all_blocks=1 00:08:58.007 --rc geninfo_unexecuted_blocks=1 00:08:58.007 
00:08:58.007 ' 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.007 --rc genhtml_branch_coverage=1 00:08:58.007 --rc genhtml_function_coverage=1 00:08:58.007 --rc genhtml_legend=1 00:08:58.007 --rc geninfo_all_blocks=1 00:08:58.007 --rc geninfo_unexecuted_blocks=1 00:08:58.007 00:08:58.007 ' 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.007 --rc genhtml_branch_coverage=1 00:08:58.007 --rc genhtml_function_coverage=1 00:08:58.007 --rc genhtml_legend=1 00:08:58.007 --rc geninfo_all_blocks=1 00:08:58.007 --rc geninfo_unexecuted_blocks=1 00:08:58.007 00:08:58.007 ' 00:08:58.007 07:04:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.007 07:04:09 thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.007 ************************************ 00:08:58.007 START TEST thread_poller_perf 00:08:58.007 ************************************ 00:08:58.007 07:04:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:58.007 [2024-11-27 07:04:09.198540] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:58.007 [2024-11-27 07:04:09.198653] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169045 ] 00:08:58.272 [2024-11-27 07:04:09.288183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.272 [2024-11-27 07:04:09.319828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.272 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:59.215 [2024-11-27T06:04:10.420Z] ====================================== 00:08:59.215 [2024-11-27T06:04:10.420Z] busy:2405697736 (cyc) 00:08:59.215 [2024-11-27T06:04:10.420Z] total_run_count: 414000 00:08:59.215 [2024-11-27T06:04:10.420Z] tsc_hz: 2400000000 (cyc) 00:08:59.215 [2024-11-27T06:04:10.420Z] ====================================== 00:08:59.215 [2024-11-27T06:04:10.420Z] poller_cost: 5810 (cyc), 2420 (nsec) 00:08:59.215 00:08:59.215 real 0m1.176s 00:08:59.215 user 0m1.090s 00:08:59.215 sys 0m0.082s 00:08:59.215 07:04:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.215 07:04:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.215 ************************************ 00:08:59.215 END TEST thread_poller_perf 00:08:59.215 ************************************ 00:08:59.215 07:04:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:59.215 07:04:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:59.215 07:04:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.215 07:04:10 thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.476 ************************************ 00:08:59.476 START TEST thread_poller_perf 00:08:59.476 ************************************ 00:08:59.476 07:04:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:59.477 [2024-11-27 07:04:10.455195] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:08:59.477 [2024-11-27 07:04:10.455302] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169248 ] 00:08:59.477 [2024-11-27 07:04:10.542183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.477 [2024-11-27 07:04:10.578388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.477 Running 1000 pollers for 1 seconds with 0 microseconds period. 
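The poller_cost line is just the two counters above divided out: cycles per poll = busy / total_run_count, converted to nanoseconds via tsc_hz. With the numbers from this run (integer truncation at each step is chosen here to reproduce the printed values; the tool's exact rounding is its own detail):

    busy=2405697736 runs=414000 tsc_hz=2400000000
    awk -v b=$busy -v r=$runs -v hz=$tsc_hz \
        'BEGIN { cyc = int(b / r); printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc * 1e9 / hz) }'
    # -> poller_cost: 5810 (cyc), 2420 (nsec)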
00:09:00.419 [2024-11-27T06:04:11.624Z] ====================================== 00:09:00.419 [2024-11-27T06:04:11.624Z] busy:2401661448 (cyc) 00:09:00.419 [2024-11-27T06:04:11.624Z] total_run_count: 5566000 00:09:00.419 [2024-11-27T06:04:11.624Z] tsc_hz: 2400000000 (cyc) 00:09:00.419 [2024-11-27T06:04:11.624Z] ====================================== 00:09:00.419 [2024-11-27T06:04:11.624Z] poller_cost: 431 (cyc), 179 (nsec) 00:09:00.419 00:09:00.419 real 0m1.172s 00:09:00.419 user 0m1.091s 00:09:00.420 sys 0m0.077s 00:09:00.420 07:04:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.420 07:04:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:00.420 ************************************ 00:09:00.420 END TEST thread_poller_perf 00:09:00.420 ************************************ 00:09:00.681 07:04:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:00.682 00:09:00.682 real 0m2.709s 00:09:00.682 user 0m2.346s 00:09:00.682 sys 0m0.377s 00:09:00.682 07:04:11 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.682 07:04:11 thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.682 ************************************ 00:09:00.682 END TEST thread 00:09:00.682 ************************************ 00:09:00.682 07:04:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:00.682 07:04:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:00.682 07:04:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.682 07:04:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.682 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:09:00.682 ************************************ 00:09:00.682 START TEST app_cmdline 00:09:00.682 ************************************ 00:09:00.682 07:04:11 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:00.682 * Looking for test storage... 
00:09:00.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:00.682 07:04:11 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.682 07:04:11 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.682 07:04:11 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.944 07:04:11 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.944 07:04:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:00.944 07:04:11 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.944 07:04:11 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.944 --rc genhtml_branch_coverage=1 00:09:00.944 --rc genhtml_function_coverage=1 00:09:00.944 --rc genhtml_legend=1 00:09:00.944 --rc geninfo_all_blocks=1 00:09:00.944 --rc geninfo_unexecuted_blocks=1 00:09:00.944 00:09:00.944 ' 00:09:00.944 07:04:11 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.944 --rc genhtml_branch_coverage=1 00:09:00.944 --rc genhtml_function_coverage=1 00:09:00.944 --rc genhtml_legend=1 00:09:00.944 --rc geninfo_all_blocks=1 00:09:00.944 --rc geninfo_unexecuted_blocks=1 
00:09:00.944 00:09:00.944 ' 00:09:00.944 07:04:11 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.944 --rc genhtml_branch_coverage=1 00:09:00.944 --rc genhtml_function_coverage=1 00:09:00.944 --rc genhtml_legend=1 00:09:00.944 --rc geninfo_all_blocks=1 00:09:00.944 --rc geninfo_unexecuted_blocks=1 00:09:00.944 00:09:00.944 ' 00:09:00.944 07:04:11 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.944 --rc genhtml_branch_coverage=1 00:09:00.944 --rc genhtml_function_coverage=1 00:09:00.944 --rc genhtml_legend=1 00:09:00.944 --rc geninfo_all_blocks=1 00:09:00.944 --rc geninfo_unexecuted_blocks=1 00:09:00.944 00:09:00.944 ' 00:09:00.944 07:04:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:00.944 07:04:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2169537 00:09:00.944 07:04:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2169537 00:09:00.945 07:04:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2169537 ']' 00:09:00.945 07:04:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:00.945 07:04:11 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.945 07:04:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.945 07:04:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.945 07:04:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.945 07:04:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:00.945 [2024-11-27 07:04:11.984364] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:09:00.945 [2024-11-27 07:04:11.984442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169537 ] 00:09:00.945 [2024-11-27 07:04:12.071320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.945 [2024-11-27 07:04:12.106264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:01.888 { 00:09:01.888 "version": "SPDK v25.01-pre git sha1 4915847b4", 00:09:01.888 "fields": { 00:09:01.888 "major": 25, 00:09:01.888 "minor": 1, 00:09:01.888 "patch": 0, 00:09:01.888 "suffix": "-pre", 00:09:01.888 "commit": "4915847b4" 00:09:01.888 } 00:09:01.888 } 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:01.888 07:04:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.888 07:04:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.889 07:04:12 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:01.889 07:04:12 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:02.149 request: 00:09:02.149 { 00:09:02.149 "method": "env_dpdk_get_mem_stats", 00:09:02.149 "req_id": 1 00:09:02.149 } 00:09:02.149 Got JSON-RPC error response 00:09:02.149 response: 00:09:02.149 { 00:09:02.149 "code": -32601, 00:09:02.149 "message": "Method not found" 00:09:02.149 } 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:02.149 07:04:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2169537 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2169537 ']' 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2169537 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169537 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169537' 00:09:02.149 killing process with pid 2169537 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 2169537 00:09:02.149 07:04:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 2169537 00:09:02.410 00:09:02.410 real 0m1.652s 00:09:02.410 user 0m1.964s 00:09:02.410 sys 0m0.435s 00:09:02.410 07:04:13 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.410 07:04:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:02.410 ************************************ 00:09:02.410 END TEST app_cmdline 00:09:02.411 ************************************ 00:09:02.411 07:04:13 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:02.411 07:04:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.411 07:04:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.411 07:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:02.411 ************************************ 00:09:02.411 START TEST version 00:09:02.411 ************************************ 00:09:02.411 07:04:13 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:02.411 * Looking for test storage... 
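Worth noting from the trace above: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the deliberate env_dpdk_get_mem_stats call is rejected with -32601 rather than executed. The behavior, condensed (commands exactly as they appear in the trace; outputs paraphrased):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # on the allowlist -> version JSON shown earlier
    scripts/rpc.py rpc_get_methods           # on the allowlist -> the two permitted method names
    scripts/rpc.py env_dpdk_get_mem_stats    # not listed -> error -32601 "Method not found"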
00:09:02.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:02.411 07:04:13 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.411 07:04:13 version -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.411 07:04:13 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.672 07:04:13 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.672 07:04:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.672 07:04:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.672 07:04:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.672 07:04:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.672 07:04:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.672 07:04:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.672 07:04:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.672 07:04:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.672 07:04:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.672 07:04:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.672 07:04:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.672 07:04:13 version -- scripts/common.sh@344 -- # case "$op" in 00:09:02.672 07:04:13 version -- scripts/common.sh@345 -- # : 1 00:09:02.672 07:04:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.672 07:04:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.672 07:04:13 version -- scripts/common.sh@365 -- # decimal 1 00:09:02.672 07:04:13 version -- scripts/common.sh@353 -- # local d=1 00:09:02.672 07:04:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.672 07:04:13 version -- scripts/common.sh@355 -- # echo 1 00:09:02.672 07:04:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.672 07:04:13 version -- scripts/common.sh@366 -- # decimal 2 00:09:02.672 07:04:13 version -- scripts/common.sh@353 -- # local d=2 00:09:02.672 07:04:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.672 07:04:13 version -- scripts/common.sh@355 -- # echo 2 00:09:02.672 07:04:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.672 07:04:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.672 07:04:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.672 07:04:13 version -- scripts/common.sh@368 -- # return 0 00:09:02.672 07:04:13 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.672 07:04:13 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.672 --rc genhtml_branch_coverage=1 00:09:02.672 --rc genhtml_function_coverage=1 00:09:02.672 --rc genhtml_legend=1 00:09:02.672 --rc geninfo_all_blocks=1 00:09:02.672 --rc geninfo_unexecuted_blocks=1 00:09:02.672 00:09:02.672 ' 00:09:02.672 07:04:13 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.672 --rc genhtml_branch_coverage=1 00:09:02.672 --rc genhtml_function_coverage=1 00:09:02.672 --rc genhtml_legend=1 00:09:02.672 --rc geninfo_all_blocks=1 00:09:02.672 --rc geninfo_unexecuted_blocks=1 00:09:02.672 00:09:02.672 ' 00:09:02.672 07:04:13 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.672 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.672 --rc genhtml_branch_coverage=1 00:09:02.672 --rc genhtml_function_coverage=1 00:09:02.672 --rc genhtml_legend=1 00:09:02.672 --rc geninfo_all_blocks=1 00:09:02.672 --rc geninfo_unexecuted_blocks=1 00:09:02.672 00:09:02.672 ' 00:09:02.672 07:04:13 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.672 --rc genhtml_branch_coverage=1 00:09:02.672 --rc genhtml_function_coverage=1 00:09:02.672 --rc genhtml_legend=1 00:09:02.672 --rc geninfo_all_blocks=1 00:09:02.673 --rc geninfo_unexecuted_blocks=1 00:09:02.673 00:09:02.673 ' 00:09:02.673 07:04:13 version -- app/version.sh@17 -- # get_header_version major 00:09:02.673 07:04:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # cut -f2 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.673 07:04:13 version -- app/version.sh@17 -- # major=25 00:09:02.673 07:04:13 version -- app/version.sh@18 -- # get_header_version minor 00:09:02.673 07:04:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # cut -f2 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.673 07:04:13 version -- app/version.sh@18 -- # minor=1 00:09:02.673 07:04:13 version -- app/version.sh@19 -- # get_header_version patch 00:09:02.673 07:04:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # cut -f2 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.673 07:04:13 version -- app/version.sh@19 -- # patch=0 00:09:02.673 07:04:13 version -- app/version.sh@20 -- # get_header_version suffix 00:09:02.673 07:04:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # cut -f2 00:09:02.673 07:04:13 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.673 07:04:13 version -- app/version.sh@20 -- # suffix=-pre 00:09:02.673 07:04:13 version -- app/version.sh@22 -- # version=25.1 00:09:02.673 07:04:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:02.673 07:04:13 version -- app/version.sh@28 -- # version=25.1rc0 00:09:02.673 07:04:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:02.673 07:04:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:02.673 07:04:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:02.673 07:04:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:02.673 00:09:02.673 real 0m0.282s 00:09:02.673 user 0m0.166s 00:09:02.673 sys 0m0.166s 00:09:02.673 07:04:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.673 
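For reference, everything version.sh printed above comes from a handful of grep/cut/tr pipelines over include/spdk/version.h. A standalone sketch of that extraction (the pipelines are copied from the trace; the -pre -> rc0 step is inferred from it, so treat the exact conditional as an assumption):

    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0    # assumed mapping; the trace shows 25.1 -> 25.1rc0
    echo "$version"                                   # -> 25.1rc0 on this checkout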
07:04:13 version -- common/autotest_common.sh@10 -- # set +x 00:09:02.673 ************************************ 00:09:02.673 END TEST version 00:09:02.673 ************************************ 00:09:02.673 07:04:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:02.673 07:04:13 -- spdk/autotest.sh@194 -- # uname -s 00:09:02.673 07:04:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:02.673 07:04:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:02.673 07:04:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:02.673 07:04:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:02.673 07:04:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.673 07:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:02.673 07:04:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:02.673 07:04:13 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:02.673 07:04:13 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.673 07:04:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.673 07:04:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.673 07:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:02.673 ************************************ 00:09:02.673 START TEST nvmf_tcp 00:09:02.673 ************************************ 00:09:02.673 07:04:13 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.935 * Looking for test storage... 
00:09:02.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:02.935 07:04:13 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.935 07:04:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.935 07:04:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.935 07:04:14 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.935 07:04:14 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:02.935 07:04:14 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.935 07:04:14 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.935 --rc genhtml_branch_coverage=1 00:09:02.935 --rc genhtml_function_coverage=1 00:09:02.935 --rc genhtml_legend=1 00:09:02.935 --rc geninfo_all_blocks=1 00:09:02.935 --rc geninfo_unexecuted_blocks=1 00:09:02.935 00:09:02.935 ' 00:09:02.935 07:04:14 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.935 --rc genhtml_branch_coverage=1 00:09:02.935 --rc genhtml_function_coverage=1 00:09:02.935 --rc genhtml_legend=1 00:09:02.935 --rc geninfo_all_blocks=1 00:09:02.935 --rc geninfo_unexecuted_blocks=1 00:09:02.935 00:09:02.935 ' 00:09:02.935 07:04:14 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:09:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.935 --rc genhtml_branch_coverage=1 00:09:02.935 --rc genhtml_function_coverage=1 00:09:02.935 --rc genhtml_legend=1 00:09:02.935 --rc geninfo_all_blocks=1 00:09:02.935 --rc geninfo_unexecuted_blocks=1 00:09:02.935 00:09:02.935 ' 00:09:02.935 07:04:14 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.935 --rc genhtml_branch_coverage=1 00:09:02.936 --rc genhtml_function_coverage=1 00:09:02.936 --rc genhtml_legend=1 00:09:02.936 --rc geninfo_all_blocks=1 00:09:02.936 --rc geninfo_unexecuted_blocks=1 00:09:02.936 00:09:02.936 ' 00:09:02.936 07:04:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:02.936 07:04:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:02.936 07:04:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:02.936 07:04:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.936 07:04:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.936 07:04:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.936 ************************************ 00:09:02.936 START TEST nvmf_target_core 00:09:02.936 ************************************ 00:09:02.936 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:03.198 * Looking for test storage... 00:09:03.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.198 --rc genhtml_branch_coverage=1 00:09:03.198 --rc genhtml_function_coverage=1 00:09:03.198 --rc genhtml_legend=1 00:09:03.198 --rc geninfo_all_blocks=1 00:09:03.198 --rc geninfo_unexecuted_blocks=1 00:09:03.198 00:09:03.198 ' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.198 --rc genhtml_branch_coverage=1 00:09:03.198 --rc genhtml_function_coverage=1 00:09:03.198 --rc genhtml_legend=1 00:09:03.198 --rc geninfo_all_blocks=1 00:09:03.198 --rc geninfo_unexecuted_blocks=1 00:09:03.198 00:09:03.198 ' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.198 --rc genhtml_branch_coverage=1 00:09:03.198 --rc genhtml_function_coverage=1 00:09:03.198 --rc genhtml_legend=1 00:09:03.198 --rc geninfo_all_blocks=1 00:09:03.198 --rc geninfo_unexecuted_blocks=1 00:09:03.198 00:09:03.198 ' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.198 --rc genhtml_branch_coverage=1 00:09:03.198 --rc genhtml_function_coverage=1 00:09:03.198 --rc genhtml_legend=1 00:09:03.198 --rc geninfo_all_blocks=1 00:09:03.198 --rc geninfo_unexecuted_blocks=1 00:09:03.198 00:09:03.198 ' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.198 07:04:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.199 
************************************ 00:09:03.199 START TEST nvmf_abort 00:09:03.199 ************************************ 00:09:03.199 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:03.460 * Looking for test storage... 00:09:03.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.460 --rc genhtml_branch_coverage=1 00:09:03.460 --rc genhtml_function_coverage=1 00:09:03.460 --rc genhtml_legend=1 00:09:03.460 --rc geninfo_all_blocks=1 00:09:03.460 --rc geninfo_unexecuted_blocks=1 00:09:03.460 00:09:03.460 ' 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.460 --rc genhtml_branch_coverage=1 00:09:03.460 --rc genhtml_function_coverage=1 00:09:03.460 --rc genhtml_legend=1 00:09:03.460 --rc geninfo_all_blocks=1 00:09:03.460 --rc geninfo_unexecuted_blocks=1 00:09:03.460 00:09:03.460 ' 00:09:03.460 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.460 --rc genhtml_branch_coverage=1 00:09:03.461 --rc genhtml_function_coverage=1 00:09:03.461 --rc genhtml_legend=1 00:09:03.461 --rc geninfo_all_blocks=1 00:09:03.461 --rc geninfo_unexecuted_blocks=1 00:09:03.461 00:09:03.461 ' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.461 --rc genhtml_branch_coverage=1 00:09:03.461 --rc genhtml_function_coverage=1 00:09:03.461 --rc genhtml_legend=1 00:09:03.461 --rc geninfo_all_blocks=1 00:09:03.461 --rc geninfo_unexecuted_blocks=1 00:09:03.461 00:09:03.461 ' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
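The trace above (scripts/common.sh@333-368) is the harness comparing the installed lcov version against 2 to pick coverage flags: split both version strings on '.' and '-', then compare field by field as integers. A minimal stand-alone sketch of that technique, assuming numeric fields only; the name ver_lt and the LCOV_RC_OPT variable are illustrative, not the harness' own:

    #!/usr/bin/env bash
    # Field-wise "less than" over dotted versions, mirroring cmp_versions:
    # split on . and -, treat missing fields as 0, compare numerically.
    ver_lt() {                       # ver_lt 1.15 2  -> exit 0 iff $1 < $2
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                     # equal is not less-than
    }

    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_RC_OPT='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

This is why the log exports LCOV_OPTS with the legacy --rc lcov_* spellings each time common.sh is sourced: lcov 1.15 sorts before 2, so the pre-2.0 option names apply.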
00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.461 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.600 07:04:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:11.600 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:11.600 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.600 07:04:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.600 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:11.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:11.601 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.601 07:04:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:11.601 07:04:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:09:11.601 00:09:11.601 --- 10.0.0.2 ping statistics --- 00:09:11.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.601 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:11.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:09:11.601 00:09:11.601 --- 10.0.0.1 ping statistics --- 00:09:11.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.601 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2173967 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2173967 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2173967 ']' 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.601 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.601 [2024-11-27 07:04:22.172609] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
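The nvmf_tcp_init sequence above, after gather_supported_nvmf_pci_devs matched the two 0x8086:0x159b (ice/e810) ports under /sys/bus/pci/devices/*/net, carved them into a point-to-point rig: cvl_0_0 moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 left in the root namespace as the initiator (10.0.0.1), port 4420 opened in iptables, and both directions ping-verified. A hedged approximation of the same topology using a veth pair instead of physical NICs; the names tgt_ns, veth_ini and veth_tgt are mine:

    #!/usr/bin/env bash
    set -e
    ip netns add tgt_ns                    # target side gets its own netns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns
    ip addr add 10.0.0.1/24 dev veth_ini   # initiator IP, as in the log
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # reachability, initiator -> target
    ip netns exec tgt_ns ping -c 1 10.0.0.1

The namespace is what lets one host play both initiator and target without the kernel short-circuiting the NVMe/TCP connection over loopback.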
00:09:11.601 [2024-11-27 07:04:22.172674] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.601 [2024-11-27 07:04:22.272523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.601 [2024-11-27 07:04:22.327345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.601 [2024-11-27 07:04:22.327397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.601 [2024-11-27 07:04:22.327407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.601 [2024-11-27 07:04:22.327414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.601 [2024-11-27 07:04:22.327420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.601 [2024-11-27 07:04:22.329191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.601 [2024-11-27 07:04:22.329415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.601 [2024-11-27 07:04:22.329416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.861 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.861 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:09:11.861 07:04:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.861 [2024-11-27 07:04:23.050350] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.861 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 Malloc0 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 Delay0 
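Everything abort.sh provisions through rpc_cmd here, plus the listener and discovery steps traced just below, condenses to this RPC sequence. It is replayed as explicit scripts/rpc.py calls; the arguments are copied from the trace, while the rpc.py invocation style and default RPC socket are assumptions:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4096 B blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000  # 1,000,000 us (~1 s) per-op latency
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # The workload the test then fires at it (same transport ID string as the log):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev is the point of the setup: with every I/O held for about a second, the queue stays full, and the stats below show nearly all of the ~28k submitted aborts finding an in-flight command to cancel.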
00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 [2024-11-27 07:04:23.132565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.127 07:04:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:12.127 [2024-11-27 07:04:23.282833] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:14.790 Initializing NVMe Controllers 00:09:14.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:14.790 controller IO queue size 128 less than required 00:09:14.790 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:14.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:14.790 Initialization complete. Launching workers. 
00:09:14.790 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28271 00:09:14.790 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28332, failed to submit 62 00:09:14.790 success 28275, unsuccessful 57, failed 0 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.790 rmmod nvme_tcp 00:09:14.790 rmmod nvme_fabrics 00:09:14.790 rmmod nvme_keyring 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2173967 ']' 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2173967 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2173967 ']' 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2173967 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2173967 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2173967' 00:09:14.790 killing process with pid 2173967 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2173967 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2173967 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.790 07:04:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.790 07:04:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.705 00:09:16.705 real 0m13.408s 00:09:16.705 user 0m14.228s 00:09:16.705 sys 0m6.563s 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:16.705 ************************************ 00:09:16.705 END TEST nvmf_abort 00:09:16.705 ************************************ 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.705 ************************************ 00:09:16.705 START TEST nvmf_ns_hotplug_stress 00:09:16.705 ************************************ 00:09:16.705 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:16.967 * Looking for test storage... 
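The nvmftestfini path above shows the firewall hygiene trick the harness relies on: every rule it inserts is tagged with an SPDK_NVMF comment, so teardown never has to remember individual rules. A condensed replay of the two halves, with both commands copied from this run's trace (ipts at setup, iptr at teardown):

    # Setup side (ipts wrapper): insert the rule with a self-describing comment.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown side (iptr): dump the ruleset, drop every tagged rule, reload the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering the saved ruleset by comment keeps the cleanup idempotent: it removes exactly the rules this test added, regardless of what other rules or prior runs left behind.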
00:09:16.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.967 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.967 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.967 07:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.967 --rc genhtml_branch_coverage=1 00:09:16.967 --rc genhtml_function_coverage=1 00:09:16.967 --rc genhtml_legend=1 00:09:16.967 --rc geninfo_all_blocks=1 00:09:16.967 --rc geninfo_unexecuted_blocks=1 00:09:16.967 00:09:16.967 ' 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.967 --rc genhtml_branch_coverage=1 00:09:16.967 --rc genhtml_function_coverage=1 00:09:16.967 --rc genhtml_legend=1 00:09:16.967 --rc geninfo_all_blocks=1 00:09:16.967 --rc geninfo_unexecuted_blocks=1 00:09:16.967 00:09:16.967 ' 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.967 --rc genhtml_branch_coverage=1 00:09:16.967 --rc genhtml_function_coverage=1 00:09:16.967 --rc genhtml_legend=1 00:09:16.967 --rc geninfo_all_blocks=1 00:09:16.967 --rc geninfo_unexecuted_blocks=1 00:09:16.967 00:09:16.967 ' 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.967 --rc genhtml_branch_coverage=1 00:09:16.967 --rc genhtml_function_coverage=1 00:09:16.967 --rc genhtml_legend=1 00:09:16.967 --rc geninfo_all_blocks=1 00:09:16.967 --rc geninfo_unexecuted_blocks=1 00:09:16.967 00:09:16.967 ' 00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:16.967 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:16.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:09:16.968 07:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
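A few entries up, nvmf/common.sh line 33 trips over `'[' '' -eq 1 ']'`: `-eq` needs integers on both sides, so an empty operand makes bash print "integer expression expected" and the test simply fails, which here happens to be the intended branch, so the run continues. The defensive pattern is to default the operand before a numeric test; a small sketch (the flag name is invented for illustration, not the actual variable common.sh checks):

    flag=""                          # hypothetical knob that may be unset/empty
    if [ "${flag:-0}" -eq 1 ]; then  # ':-0' keeps -eq from seeing an empty string
        echo "feature enabled"
    else
        echo "feature disabled"      # taken when flag is empty, with no error
    fi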
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:25.112 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:09:25.113 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:25.113
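gather_supported_nvmf_pci_devs is building per-model PCI lists from a vendor:device cache (e810 is 0x1592/0x159b, x722 is 0x37d2, plus a set of Mellanox IDs) and keeps only the e810 list because this run set SPDK_TEST_NVMF_NICS=e810; the entries that follow then resolve each matching function's kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A standalone sketch of that sysfs lookup (not the actual SPDK helper, just the underlying walk):

    # List the netdev behind every PCI function matching one vendor:device pair.
    vendor=0x8086 device=0x159b              # the E810 ID this run matched
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == "$vendor" && $(< "$pci/device") == "$device" ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue        # function may have no bound netdev
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done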
07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:25.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:25.113 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:25.113 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:09:25.113 00:09:25.113 --- 10.0.0.2 ping statistics --- 00:09:25.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.113 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:09:25.113 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:09:25.114 00:09:25.114 --- 10.0.0.1 ping statistics --- 00:09:25.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.114 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2179020 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2179020 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2179020 ']' 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.114 07:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.114 [2024-11-27 07:04:35.706303] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:09:25.114 [2024-11-27 07:04:35.706375] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.114 [2024-11-27 07:04:35.805836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.114 [2024-11-27 07:04:35.857094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.114 [2024-11-27 07:04:35.857144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.114 [2024-11-27 07:04:35.857153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.114 [2024-11-27 07:04:35.857171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.114 [2024-11-27 07:04:35.857177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
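At this point nvmftestinit has turned the two E810 ports into a self-contained point-to-point lab: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-verified (0.633 ms and 0.275 ms above), nvme-tcp is loaded for the kernel initiator, and nvmf_tgt is launched inside the namespace on core mask 0xE (the three reactors above). Condensed from the commands in the trace:

    # Point-to-point NVMe/TCP topology as set up by nvmf_tcp_init in this run.
    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"           # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"       # initiator side (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> root ns
    modprobe nvme-tcp                           # kernel initiator for later connects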
00:09:25.114 [2024-11-27 07:04:35.858973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.114 [2024-11-27 07:04:35.859136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.114 [2024-11-27 07:04:35.859136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.376 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.376 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:09:25.376 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.376 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.376 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.637 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.637 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:25.637 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.637 [2024-11-27 07:04:36.752134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.637 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.898 07:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.159 [2024-11-27 07:04:37.155279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.159 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.419 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:26.419 Malloc0 00:09:26.419 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:26.680 Delay0 00:09:26.680 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.941 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:27.202 NULL1 00:09:27.202 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:27.202 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2179576 00:09:27.202 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:27.202 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:27.202 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.463 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.723 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:27.723 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:27.983 true 00:09:27.983 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:27.983 07:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.983 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.244 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:28.244 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:28.504 true 00:09:28.504 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:28.504 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.764 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.764 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:28.764 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:29.024 true 00:09:29.024 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:29.024 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.284 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.284 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:29.284 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:29.545 true 00:09:29.545 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:29.545 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.805 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.066 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:30.066 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:30.066 true 00:09:30.066 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:30.066 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.326 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.587 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:30.587 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:30.587 true 00:09:30.587 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:30.587 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.848 07:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.108 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:31.108 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:31.108 true 00:09:31.108 07:04:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:31.108 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.368 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.628 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:31.628 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:31.628 true 00:09:31.889 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:31.889 07:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.889 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.151 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:32.151 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:32.413 true 00:09:32.413 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:32.413 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.413 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.673 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:32.673 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:32.935 true 00:09:32.935 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:32.935 07:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.935 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.196 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:33.196 07:04:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:33.458 true 00:09:33.458 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:33.458 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.721 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.721 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:33.721 07:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:33.982 true 00:09:33.982 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:33.982 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.243 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.243 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:34.243 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:34.504 true 00:09:34.504 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:34.504 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.764 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.764 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:34.764 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:35.025 true 00:09:35.025 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:35.025 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.285 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.545 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:35.545 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:35.545 true 00:09:35.545 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:35.545 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.806 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.066 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:36.066 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:36.066 true 00:09:36.066 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:36.066 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.326 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.586 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:36.586 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:36.586 true 00:09:36.846 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:36.846 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.846 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.106 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:37.107 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:37.107 true 00:09:37.367 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:37.367 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.367 07:04:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.627 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:37.627 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:37.889 true 00:09:37.889 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:37.889 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.889 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.149 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:38.149 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:38.410 true 00:09:38.410 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:38.410 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.670 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.670 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:38.670 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:38.932 true 00:09:38.932 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:38.932 07:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.193 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.193 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:39.193 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:39.455 true 00:09:39.455 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:39.455 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.716 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.716 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:39.716 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:39.978 true 00:09:39.978 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:39.978 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.238 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.498 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:40.498 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:40.498 true 00:09:40.498 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:40.498 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.759 07:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.019 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:41.019 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:41.019 true 00:09:41.019 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:41.019 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.278 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.538 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:41.538 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:41.538 true 00:09:41.815 07:04:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:41.815 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.815 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.077 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:42.077 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:42.077 true 00:09:42.338 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:42.338 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.338 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.600 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:42.600 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:42.860 true 00:09:42.860 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:42.860 07:04:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.860 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.121 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:43.121 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:43.381 true 00:09:43.381 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:43.381 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.642 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.642 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:43.642 07:04:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:43.902 true 00:09:43.902 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:43.902 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.161 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.161 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:44.161 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:44.420 true 00:09:44.420 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:44.420 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.680 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.939 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:44.939 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:44.939 true 00:09:44.939 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:44.939 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.198 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.458 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:45.458 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:45.458 true 00:09:45.718 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:45.719 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.719 07:04:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.978 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:45.978 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:46.238 true 00:09:46.238 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:46.238 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.238 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.499 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:46.499 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:46.759 true 00:09:46.759 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:46.759 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.019 07:04:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.019 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:47.019 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:47.280 true 00:09:47.280 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:47.280 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.540 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.540 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:47.540 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:47.800 true 00:09:47.800 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:47.800 07:04:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.060 07:04:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.320 07:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:48.321 07:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:48.321 true 00:09:48.321 07:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:48.321 07:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.580 07:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.841 07:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:48.841 07:04:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:48.841 true 00:09:48.841 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:48.841 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.101 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.362 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:49.362 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:49.621 true 00:09:49.621 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:49.621 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.621 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.881 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:49.882 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:50.141 true 00:09:50.141 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:50.141 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.141 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.401 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:50.402 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:50.662 true 00:09:50.662 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:50.662 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.922 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.922 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:50.922 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:51.183 true 00:09:51.183 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:51.183 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.443 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.443 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:51.443 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:51.703 true 00:09:51.703 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:51.703 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.965 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.226 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:52.226 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:52.226 true 00:09:52.226 07:05:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:52.226 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.488 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.749 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:52.749 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:52.749 true 00:09:52.749 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:52.749 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.011 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.272 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:53.272 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:53.272 true 00:09:53.533 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:53.533 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.533 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.793 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:53.793 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:54.054 true 00:09:54.054 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:54.054 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.054 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.314 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:54.314 07:05:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:54.575 true 00:09:54.575 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:54.575 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.836 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.836 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:09:54.836 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:09:55.096 true 00:09:55.096 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:55.096 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.357 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.357 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:09:55.358 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:09:55.618 true 00:09:55.618 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:55.618 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.879 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.140 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:09:56.140 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:09:56.140 true 00:09:56.140 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:56.140 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.401 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.661 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:09:56.661 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:09:56.661 true 00:09:56.661 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:56.661 07:05:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.922 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.182 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:09:57.182 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:09:57.182 true 00:09:57.443 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:57.443 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.443 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.443 Initializing NVMe Controllers 00:09:57.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.443 Controller IO queue size 128, less than required. 00:09:57.443 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:57.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:57.443 Initialization complete. Launching workers. 
00:09:57.443 ========================================================
00:09:57.443 Latency(us)
00:09:57.443 Device Information : IOPS MiB/s Average min max
00:09:57.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30686.60 14.98 4171.15 1123.10 11471.39
00:09:57.443 ========================================================
00:09:57.443 Total : 30686.60 14.98 4171.15 1123.10 11471.39
00:09:57.705 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:09:57.705 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:09:57.705 true 00:09:57.705 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2179576 00:09:57.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2179576) - No such process 00:09:57.705 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2179576 00:09:57.705 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.966 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.225 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:58.225 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:58.225 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:58.225 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:58.225 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:58.225 null0 00:09:58.485 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:58.485 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:58.485 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:58.485 null1 00:09:58.486 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:58.486 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:58.486 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:58.746 null2 00:09:58.746 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:58.746 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:58.746
07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:59.006 null3 00:09:59.006 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.006 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.006 07:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:59.006 null4 00:09:59.006 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.006 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.006 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:59.267 null5 00:09:59.267 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.267 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.267 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:59.267 null6 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:59.528 null7 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
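The sh@58 through sh@66 entries traced here are the second phase of ns_hotplug_stress.sh: eight null bdevs are created, then eight add_remove workers are launched in the background and reaped with wait. A minimal sketch reconstructed from the xtrace (the script text itself is not reproduced in this log, and rpc_py is shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path seen in every entry):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    # sh@59/sh@60: one 100 MiB null bdev with a 4096-byte block size per worker
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    # sh@62 to sh@64: each background worker hotplugs namespace ID i+1, backed by null$i
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    # sh@66: wait 2186268 2186269 ... (the worker PIDs collected above)
    wait "${pids[@]}"

Because the eight workers run concurrently, their xtrace entries below are interleaved.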
00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.528 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
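Each worker runs the add_remove helper visible in the sh@14 through sh@18 entries: ten cycles of adding a namespace backed by the worker's null bdev and removing it again. Sketched from the xtrace under the same rpc_py assumption:

    add_remove() {
        # sh@14: worker-private namespace ID and backing bdev
        local nsid=$1 bdev=$2
        # sh@16 to sh@18: ten add/remove cycles against the shared subsystem
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Since every worker owns a distinct namespace ID, the concurrent cycles exercise hotplug in parallel without colliding on the same NSID.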
00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
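For comparison, the single-namespace phase that fills the first half of this trace (the repeating sh@44 to sh@50 entries, with null_size stepping 1023 through 1055 here) reduces to one loop that keeps hot-swapping namespace 1 and resizing the NULL1 bdev for as long as the I/O generator is alive. A sketch, with perf_pid as a hypothetical stand-in for the generator PID (2179576 in the entries above):

    while kill -0 "$perf_pid"; do
        # sh@45/sh@46: hot-remove namespace 1, then re-add it backed by Delay0
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # sh@49/sh@50: step the NULL1 resize target up by one each pass
        null_size=$((null_size + 1))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done

That loop ends exactly where the trace prints "kill: (2179576) - No such process": the I/O generator exited after emitting the latency summary, kill -0 failed, and the script moved on to the multi-worker phase.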
00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2186268 2186269 2186271 2186273 2186275 2186277 2186279 2186281 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.529 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.789 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.050 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.051 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.383 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.384 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.384 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.384 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.384 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.384 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.384 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.669 07:05:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.669 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.960 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.960 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.221 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.482 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.743 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.004 07:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.004 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
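Every iteration in the trace above and below comes from the same three traced lines of target/ns_hotplug_stress.sh (@16-@18): bump a counter ten times and, on each pass, hot-attach one null bdev as a namespace of nqn.2016-06.io.spdk:cnode1 while hot-detaching another. A minimal bash sketch of that loop, reconstructed from this xtrace alone -- the random namespace choice, the backgrounding, and the rpc_py shorthand are assumptions, not the verbatim script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as seen in the trace
    nqn=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; ++i)); do                 # @16: (( ++i )) / (( i < 10 ))
        n=$((RANDOM % 8 + 1))                      # assumed: nsids 1-8 appear in random order
        # @17: hot-attach null bdev "null$((n - 1))" as namespace $n
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
        n=$((RANDOM % 8 + 1))
        # @18: hot-detach namespace $n while the host stays connected
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait                                           # assumed: the add/remove calls visibly overlap in the trace

The stress loop trace continues below until the counter reaches 10 and the script falls through to its teardown.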
00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.264 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.524 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.525 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.785 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.044 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.045 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.304 
07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:03.304 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:03.564 rmmod nvme_tcp
00:10:03.564 rmmod nvme_fabrics
00:10:03.564 rmmod nvme_keyring
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2179020 ']'
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2179020
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2179020 ']'
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2179020
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179020
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179020'
00:10:03.564 killing process with pid 2179020
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2179020
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2179020
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:03.564 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:06.107
00:10:06.107 real 0m48.926s
00:10:06.107 user 3m18.533s
00:10:06.107 sys 0m17.538s
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:06.107 ************************************
00:10:06.107 END TEST nvmf_ns_hotplug_stress
00:10:06.107 ************************************
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:06.107 ************************************
00:10:06.107 START TEST nvmf_delete_subsystem
00:10:06.107 ************************************
00:10:06.107 07:05:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:06.107 * Looking for test storage...
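For reference, the killprocess call traced in the teardown just above (nvmf/common.sh@518 into common/autotest_common.sh@954-@978) reduces to the following shape. This is a hedged reconstruction from the xtrace only, not the verbatim helper; the sudo branch is assumed from the single '[' reactor_1 = sudo ']' test and is not taken in this run:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @954: refuse an empty pid
        kill -0 "$pid"                                       # @958: bail out if the target is already gone
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1 for the nvmf target app here
        fi
        if [ "$process_name" = sudo ]; then                  # @964: assumed branch for sudo-wrapped targets
            :                                                # would need to signal sudo's child instead
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap the process before the next test starts
    }

The storage probe for the next test continues below.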
00:10:06.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.107 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.107 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.107 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.108 --rc genhtml_branch_coverage=1 00:10:06.108 --rc genhtml_function_coverage=1 00:10:06.108 --rc genhtml_legend=1 00:10:06.108 --rc geninfo_all_blocks=1 00:10:06.108 --rc geninfo_unexecuted_blocks=1 00:10:06.108 00:10:06.108 ' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.108 --rc genhtml_branch_coverage=1 00:10:06.108 --rc genhtml_function_coverage=1 00:10:06.108 --rc genhtml_legend=1 00:10:06.108 --rc geninfo_all_blocks=1 00:10:06.108 --rc geninfo_unexecuted_blocks=1 00:10:06.108 00:10:06.108 ' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.108 --rc genhtml_branch_coverage=1 00:10:06.108 --rc genhtml_function_coverage=1 00:10:06.108 --rc genhtml_legend=1 00:10:06.108 --rc geninfo_all_blocks=1 00:10:06.108 --rc geninfo_unexecuted_blocks=1 00:10:06.108 00:10:06.108 ' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.108 --rc genhtml_branch_coverage=1 00:10:06.108 --rc genhtml_function_coverage=1 00:10:06.108 --rc genhtml_legend=1 00:10:06.108 --rc geninfo_all_blocks=1 00:10:06.108 --rc geninfo_unexecuted_blocks=1 00:10:06.108 00:10:06.108 ' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:06.108 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.109 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:14.251 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:14.252 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.252 
07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:14.252 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:14.252 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:14.252 Found net devices under 0000:4b:00.1: cvl_0_1 
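
Note: the device-discovery loop traced above resolves each supported PCI function to its kernel net devices by globbing sysfs, then strips the sysfs path down to the bare interface name. A minimal standalone sketch of the same idea (the two PCI addresses are the ones found in this run; the existence guard is added here for safety, so this is illustrative rather than the common.sh implementation):

    # Map each NVMe-oF-capable PCI function to the netdevs bound to it.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue      # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")      # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
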
00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:10:14.252 00:10:14.252 --- 10.0.0.2 ping statistics --- 00:10:14.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.252 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:10:14.252 00:10:14.252 --- 10.0.0.1 ping statistics --- 00:10:14.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.252 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.252 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2191466 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2191466 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2191466 ']' 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.253 07:05:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.253 07:05:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.253 [2024-11-27 07:05:24.739407] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:10:14.253 [2024-11-27 07:05:24.739471] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.253 [2024-11-27 07:05:24.842315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:14.253 [2024-11-27 07:05:24.893090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.253 [2024-11-27 07:05:24.893144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.253 [2024-11-27 07:05:24.893153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.253 [2024-11-27 07:05:24.893170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.253 [2024-11-27 07:05:24.893178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.253 [2024-11-27 07:05:24.894821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.253 [2024-11-27 07:05:24.894823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.514 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.514 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:10:14.514 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.514 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.514 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 [2024-11-27 07:05:25.619156] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:14.515 07:05:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 [2024-11-27 07:05:25.643515] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 NULL1 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 Delay0 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2191638 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:14.515 07:05:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:14.776 [2024-11-27 07:05:25.770521] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
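
Note: stripped of the xtrace prefixes, the delete_subsystem setup above reduces to the rpc.py sequence below (the rpc.py path is assumed; the commands and arguments are exactly those traced). The delay bdev adds roughly one second of latency to every I/O, so the nvmf_delete_subsystem call that follows races against commands still in flight; perf then reports those commands as aborted, which is the point of the test rather than a failure.

    # Equivalent setup driven directly through rpc.py (path assumed):
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s avg/p99 latency, reads and writes
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The (sct=0, sc=8) completions that follow are NVMe generic status 0x08, "Command Aborted due to SQ Deletion": deleting the subsystem tears down its queue pairs while perf still has I/O queued on them.
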
00:10:16.686 07:05:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.686 07:05:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.686 07:05:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 [2024-11-27 07:05:27.935395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d02c0 is same with the state(6) to be set 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 
Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 
Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 starting I/O failed: -6 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 [2024-11-27 07:05:27.941802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4118000c40 is same with the state(6) to be set 00:10:16.947 starting I/O failed: -6 00:10:16.947 starting I/O failed: -6 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 
00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Read completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:16.947 Write completed with error (sct=0, sc=8) 00:10:17.887 [2024-11-27 07:05:28.907350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d19b0 is same with the state(6) to be set 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 [2024-11-27 07:05:28.938562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d04a0 is same with the state(6) to be set 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read 
completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 [2024-11-27 07:05:28.939109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0860 is same with the state(6) to be set 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 [2024-11-27 07:05:28.942724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f411800d7c0 is same with the state(6) to be set 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Read completed with error (sct=0, sc=8) 00:10:17.887 Write completed with error (sct=0, sc=8) 00:10:17.887 [2024-11-27 07:05:28.943807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f411800d020 is same with the state(6) to be set 00:10:17.887 Initializing NVMe Controllers 00:10:17.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:17.887 Controller IO queue size 128, less than required. 00:10:17.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:17.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:17.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:17.887 Initialization complete. Launching workers. 00:10:17.887 ======================================================== 00:10:17.887 Latency(us) 00:10:17.887 Device Information : IOPS MiB/s Average min max 00:10:17.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.32 0.08 906347.10 371.59 1006486.99 00:10:17.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.31 0.08 908833.48 325.95 1011646.33 00:10:17.887 ======================================================== 00:10:17.887 Total : 329.63 0.16 907594.04 325.95 1011646.33 00:10:17.887 00:10:17.887 [2024-11-27 07:05:28.944498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d19b0 (9): Bad file descriptor 00:10:17.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:17.887 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.887 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:17.887 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2191638 00:10:17.887 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2191638 00:10:18.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2191638) - No such process 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2191638 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2191638 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2191638 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.467 [2024-11-27 07:05:29.476508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2192490 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:18.467 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:18.467 [2024-11-27 07:05:29.581811] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
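
Note: the repeated "(( delay++ > 20 ))" / "kill -0" / "sleep 0.5" entries below are delete_subsystem.sh polling for the perf process to exit after the subsystem is deleted out from under it; once kill -0 reports "No such process" the script stops waiting. A minimal sketch of that polling loop, reconstructed from the trace (the exact control flow in the script may differ):

    delay=0
    while (( delay++ <= 20 )); do       # give up after ~10 s of polling
        kill -0 "$perf_pid" || break    # prints "No such process" once perf exits
        sleep 0.5
    done
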
00:10:19.036 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.036 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:19.036 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:19.606 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.606 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:19.606 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:19.865 07:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.865 07:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:19.865 07:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:20.435 07:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:20.435 07:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:20.435 07:05:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:21.004 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:21.004 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:21.004 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:21.574 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:21.574 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:21.574 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:21.574 Initializing NVMe Controllers 00:10:21.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:21.574 Controller IO queue size 128, less than required. 00:10:21.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:21.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:21.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:21.574 Initialization complete. Launching workers. 
00:10:21.574 ======================================================== 00:10:21.574 Latency(us) 00:10:21.574 Device Information : IOPS MiB/s Average min max 00:10:21.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001900.01 1000165.75 1005712.01 00:10:21.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002891.61 1000361.07 1008075.21 00:10:21.574 ======================================================== 00:10:21.574 Total : 256.00 0.12 1002395.81 1000165.75 1008075.21 00:10:21.574 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2192490 00:10:21.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2192490) - No such process 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2192490 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.835 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.095 rmmod nvme_tcp 00:10:22.095 rmmod nvme_fabrics 00:10:22.095 rmmod nvme_keyring 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2191466 ']' 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2191466 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2191466 ']' 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2191466 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2191466 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2191466' 00:10:22.095 killing process with pid 2191466 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2191466 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2191466 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.095 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.096 07:05:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.640 00:10:24.640 real 0m18.452s 00:10:24.640 user 0m31.030s 00:10:24.640 sys 0m6.743s 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.640 ************************************ 00:10:24.640 END TEST nvmf_delete_subsystem 00:10:24.640 ************************************ 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.640 ************************************ 00:10:24.640 START TEST nvmf_host_management 00:10:24.640 ************************************ 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:24.640 * Looking for test storage... 
00:10:24.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.640 --rc genhtml_branch_coverage=1 00:10:24.640 --rc genhtml_function_coverage=1 00:10:24.640 --rc genhtml_legend=1 00:10:24.640 --rc geninfo_all_blocks=1 00:10:24.640 --rc geninfo_unexecuted_blocks=1 00:10:24.640 00:10:24.640 ' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.640 --rc genhtml_branch_coverage=1 00:10:24.640 --rc genhtml_function_coverage=1 00:10:24.640 --rc genhtml_legend=1 00:10:24.640 --rc geninfo_all_blocks=1 00:10:24.640 --rc geninfo_unexecuted_blocks=1 00:10:24.640 00:10:24.640 ' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.640 --rc genhtml_branch_coverage=1 00:10:24.640 --rc genhtml_function_coverage=1 00:10:24.640 --rc genhtml_legend=1 00:10:24.640 --rc geninfo_all_blocks=1 00:10:24.640 --rc geninfo_unexecuted_blocks=1 00:10:24.640 00:10:24.640 ' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.640 --rc genhtml_branch_coverage=1 00:10:24.640 --rc genhtml_function_coverage=1 00:10:24.640 --rc genhtml_legend=1 00:10:24.640 --rc geninfo_all_blocks=1 00:10:24.640 --rc geninfo_unexecuted_blocks=1 00:10:24.640 00:10:24.640 ' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.640 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same repeated toolchain prefixes and trailing system dirs] 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated toolchain prefixes and trailing system dirs] 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same repeated toolchain prefixes and trailing system dirs] 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:10:24.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.641 07:05:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:32.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:32.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.785 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:32.786 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.786 07:05:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:32.786 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.786 07:05:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:10:32.786 00:10:32.786 --- 10.0.0.2 ping statistics --- 00:10:32.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.786 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:10:32.786 00:10:32.786 --- 10.0.0.1 ping statistics --- 00:10:32.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.786 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2197424 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2197424 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:32.786 07:05:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2197424 ']' 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.786 07:05:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.786 [2024-11-27 07:05:43.255587] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:10:32.786 [2024-11-27 07:05:43.255659] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.786 [2024-11-27 07:05:43.357014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.786 [2024-11-27 07:05:43.410464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.786 [2024-11-27 07:05:43.410520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.786 [2024-11-27 07:05:43.410529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.786 [2024-11-27 07:05:43.410540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.786 [2024-11-27 07:05:43.410546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
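The nvmfappstart step above reduces to three moves: launch nvmf_tgt inside the test namespace, record its pid, and block until the RPC socket is listening before any rpc_cmd runs. A hedged sketch of that sequence; the loop is a simplification of waitforlisten, which in autotest_common.sh also caps retries (max_retries=100 above) and checks that the process is still alive:

# start the target in the namespace, same flags as the trace above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# waitforlisten, simplified: poll until the UNIX-domain RPC socket exists
while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done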
00:10:32.786 [2024-11-27 07:05:43.412921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.786 [2024-11-27 07:05:43.413067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.786 [2024-11-27 07:05:43.413228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.786 [2024-11-27 07:05:43.413228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.048 [2024-11-27 07:05:44.126837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.048 Malloc0 00:10:33.048 [2024-11-27 07:05:44.203506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.048 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2197582 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2197582 /var/tmp/bdevperf.sock 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2197582 ']' 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:33.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.311 { 00:10:33.311 "params": { 00:10:33.311 "name": "Nvme$subsystem", 00:10:33.311 "trtype": "$TEST_TRANSPORT", 00:10:33.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.311 "adrfam": "ipv4", 00:10:33.311 "trsvcid": "$NVMF_PORT", 00:10:33.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.311 "hdgst": ${hdgst:-false}, 00:10:33.311 "ddgst": ${ddgst:-false} 00:10:33.311 }, 00:10:33.311 "method": "bdev_nvme_attach_controller" 00:10:33.311 } 00:10:33.311 EOF 00:10:33.311 )") 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:33.311 07:05:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.311 "params": { 00:10:33.311 "name": "Nvme0", 00:10:33.311 "trtype": "tcp", 00:10:33.311 "traddr": "10.0.0.2", 00:10:33.311 "adrfam": "ipv4", 00:10:33.311 "trsvcid": "4420", 00:10:33.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:33.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:33.311 "hdgst": false, 00:10:33.311 "ddgst": false 00:10:33.311 }, 00:10:33.311 "method": "bdev_nvme_attach_controller" 00:10:33.311 }' 00:10:33.311 [2024-11-27 07:05:44.313324] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
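The bdevperf launch above relies on bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza seen in the trace, and --json /dev/fd/63 is simply the pipe bash substituted for it, so the config never touches disk. A reduced sketch of the launch plus the waitforio probe that follows (same flags, socket path, and jq filter as the trace; retry loop and error handling omitted):

# start the perf job, feeding the generated config through a pipe
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# waitforio, simplified: read the counter once instead of looping
reads=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
    | jq -r '.bdevs[0].num_read_ops')
[ "$reads" -ge 100 ] && echo "I/O is flowing ($reads reads)"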
00:10:33.311 [2024-11-27 07:05:44.313393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197582 ] 00:10:33.311 [2024-11-27 07:05:44.406167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.311 [2024-11-27 07:05:44.460123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.573 Running I/O for 10 seconds... 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:34.146 07:05:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.146 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:34.146 [2024-11-27 07:05:45.219969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86aab0 is same with the state(6) to be set
[identical tcp.c:1790 "recv state of tqpair=0x86aab0 is same with the state(6) to be set" message repeated several dozen times, 07:05:45.220052 through 07:05:45.220482]
00:10:34.146 [2024-11-27 07:05:45.220822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.146 [2024-11-27 07:05:45.220886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same READ command / ABORTED - SQ DELETION completion pair repeated for cid:1 through cid:40 (lba 90240 through 95232), 07:05:45.220913 through 07:05:45.221636]
00:10:34.147 [2024-11-27 07:05:45.221646] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.221988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.221995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.222005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.222012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.222022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.222030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.222040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:34.147 [2024-11-27 07:05:45.222047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.147 [2024-11-27 07:05:45.222056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b8ee0 is same with the state(6) to be set 00:10:34.147 [2024-11-27 07:05:45.223369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:34.147 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.147 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:34.147 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.147 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:34.147 task offset: 90112 on job bdev=Nvme0n1 fails 00:10:34.147 00:10:34.147 Latency(us) 00:10:34.147 [2024-11-27T06:05:45.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.147 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:34.147 Job: Nvme0n1 ended in about 0.51 seconds with error 00:10:34.147 Verification LBA range: start 0x0 length 0x400 00:10:34.147 Nvme0n1 : 0.51 1380.02 86.25 125.46 0.00 41406.95 4751.36 36044.80 00:10:34.147 [2024-11-27T06:05:45.352Z] =================================================================================================================== 00:10:34.147 [2024-11-27T06:05:45.352Z] Total : 1380.02 86.25 125.46 0.00 41406.95 4751.36 36044.80 00:10:34.147 [2024-11-27 07:05:45.225883] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:34.147 [2024-11-27 07:05:45.225927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0010 (9): Bad file descriptor 00:10:34.147 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.147 07:05:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:34.147 [2024-11-27 07:05:45.241330] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
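The `rpc_cmd nvmf_subsystem_add_host` step traced above is a thin wrapper over rpc.py. A minimal standalone sketch, assuming the default /var/tmp/spdk.sock RPC socket and the NQNs used in this run:

    # grant host0 access to the cnode0 subsystem on the running target
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0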
00:10:35.087 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2197582 00:10:35.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2197582) - No such process 00:10:35.087 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:35.088 { 00:10:35.088 "params": { 00:10:35.088 "name": "Nvme$subsystem", 00:10:35.088 "trtype": "$TEST_TRANSPORT", 00:10:35.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.088 "adrfam": "ipv4", 00:10:35.088 "trsvcid": "$NVMF_PORT", 00:10:35.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.088 "hdgst": ${hdgst:-false}, 00:10:35.088 "ddgst": ${ddgst:-false} 00:10:35.088 }, 00:10:35.088 "method": "bdev_nvme_attach_controller" 00:10:35.088 } 00:10:35.088 EOF 00:10:35.088 )") 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:35.088 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:35.088 "params": { 00:10:35.088 "name": "Nvme0", 00:10:35.088 "trtype": "tcp", 00:10:35.088 "traddr": "10.0.0.2", 00:10:35.088 "adrfam": "ipv4", 00:10:35.088 "trsvcid": "4420", 00:10:35.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:35.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:35.088 "hdgst": false, 00:10:35.088 "ddgst": false 00:10:35.088 }, 00:10:35.088 "method": "bdev_nvme_attach_controller" 00:10:35.088 }' 00:10:35.349 [2024-11-27 07:05:46.297788] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:10:35.349 [2024-11-27 07:05:46.297843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197958 ] 00:10:35.349 [2024-11-27 07:05:46.384833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.349 [2024-11-27 07:05:46.420437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.610 Running I/O for 1 seconds... 
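The bdevperf run above streams its generated JSON config over /dev/fd/62; the same run can be reproduced by saving the printed config to a file instead. A minimal sketch, assuming the target from this run is still listening on 10.0.0.2:4420 and the JSON printed above is saved as bdevperf.json:

    # same flags as the traced run: queue depth 64, 64 KiB IOs, verify workload, 1 s duration
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json bdevperf.json -q 64 -o 65536 -w verify -t 1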
00:10:36.551 1666.00 IOPS, 104.12 MiB/s 00:10:36.551 Latency(us) 00:10:36.551 [2024-11-27T06:05:47.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.551 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:36.551 Verification LBA range: start 0x0 length 0x400 00:10:36.551 Nvme0n1 : 1.03 1678.75 104.92 0.00 0.00 37454.52 6362.45 32112.64 00:10:36.551 [2024-11-27T06:05:47.756Z] =================================================================================================================== 00:10:36.551 [2024-11-27T06:05:47.756Z] Total : 1678.75 104.92 0.00 0.00 37454.52 6362.45 32112.64 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.551 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.551 rmmod nvme_tcp 00:10:36.812 rmmod nvme_fabrics 00:10:36.812 rmmod nvme_keyring 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2197424 ']' 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2197424 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2197424 ']' 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2197424 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2197424 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:36.812 07:05:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2197424' 00:10:36.812 killing process with pid 2197424 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2197424 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2197424 00:10:36.812 [2024-11-27 07:05:47.957149] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.812 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:39.359 00:10:39.359 real 0m14.628s 00:10:39.359 user 0m22.983s 00:10:39.359 sys 0m6.734s 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:39.359 ************************************ 00:10:39.359 END TEST nvmf_host_management 00:10:39.359 ************************************ 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.359 ************************************ 00:10:39.359 START TEST nvmf_lvol 00:10:39.359 ************************************ 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:39.359 * Looking for test storage... 00:10:39.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:39.359 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:39.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.360 --rc genhtml_branch_coverage=1 00:10:39.360 --rc genhtml_function_coverage=1 00:10:39.360 --rc genhtml_legend=1 00:10:39.360 --rc geninfo_all_blocks=1 00:10:39.360 --rc geninfo_unexecuted_blocks=1 00:10:39.360 00:10:39.360 ' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:39.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.360 --rc genhtml_branch_coverage=1 00:10:39.360 --rc genhtml_function_coverage=1 00:10:39.360 --rc genhtml_legend=1 00:10:39.360 --rc geninfo_all_blocks=1 00:10:39.360 --rc geninfo_unexecuted_blocks=1 00:10:39.360 00:10:39.360 ' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:39.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.360 --rc genhtml_branch_coverage=1 00:10:39.360 --rc genhtml_function_coverage=1 00:10:39.360 --rc genhtml_legend=1 00:10:39.360 --rc geninfo_all_blocks=1 00:10:39.360 --rc geninfo_unexecuted_blocks=1 00:10:39.360 00:10:39.360 ' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:39.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.360 --rc genhtml_branch_coverage=1 00:10:39.360 --rc genhtml_function_coverage=1 00:10:39.360 --rc genhtml_legend=1 00:10:39.360 --rc geninfo_all_blocks=1 00:10:39.360 --rc geninfo_unexecuted_blocks=1 00:10:39.360 00:10:39.360 ' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
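The `lt 1.15 2` trace above is the harness comparing the installed lcov version against 2, field by field, via cmp_versions in scripts/common.sh. A minimal equivalent sketch, using sort -V in place of the traced per-field loop:

    # returns success when $1 is strictly older than $2
    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo 'lcov 1.15 < 2'   # true here, so the lcov-1.x LCOV_OPTS get exported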
00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:39.360 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.361 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.506 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:47.506 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:47.507 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.507 07:05:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:47.507 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:47.507 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:10:47.507 00:10:47.507 --- 10.0.0.2 ping statistics --- 00:10:47.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.507 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:10:47.507 00:10:47.507 --- 10.0.0.1 ping statistics --- 00:10:47.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.507 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2202611 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2202611 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2202611 ']' 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.507 07:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.507 [2024-11-27 07:05:57.942523] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:10:47.507 [2024-11-27 07:05:57.942587] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.507 [2024-11-27 07:05:58.043374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:47.507 [2024-11-27 07:05:58.096346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.507 [2024-11-27 07:05:58.096402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.507 [2024-11-27 07:05:58.096411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.507 [2024-11-27 07:05:58.096419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.507 [2024-11-27 07:05:58.096425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.507 [2024-11-27 07:05:58.098289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.507 [2024-11-27 07:05:58.098575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.507 [2024-11-27 07:05:58.098577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.768 07:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.768 07:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:47.768 07:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.768 07:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.768 07:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 07:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.768 07:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.029 [2024-11-27 07:05:58.976358] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.029 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.319 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:48.319 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.319 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:48.319 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:48.579 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:48.840 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f493ccee-48e4-4a37-8e0c-a94193e0b91c 00:10:48.840 07:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f493ccee-48e4-4a37-8e0c-a94193e0b91c lvol 20 00:10:49.100 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=60c936c6-1b71-4900-b43a-c63a31d4bd4d 00:10:49.100 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:49.100 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60c936c6-1b71-4900-b43a-c63a31d4bd4d 00:10:49.359 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:49.618 [2024-11-27 07:06:00.617282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.618 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:49.877 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2203361 00:10:49.877 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:49.877 07:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:50.814 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 60c936c6-1b71-4900-b43a-c63a31d4bd4d MY_SNAPSHOT 00:10:51.074 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d65d0d20-b4a8-428b-94ce-fea42a76301e 00:10:51.074 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 60c936c6-1b71-4900-b43a-c63a31d4bd4d 30 00:10:51.334 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d65d0d20-b4a8-428b-94ce-fea42a76301e MY_CLONE 00:10:51.334 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8db48585-8375-46fa-b09f-eb38b9be4839 00:10:51.334 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8db48585-8375-46fa-b09f-eb38b9be4839 00:10:51.903 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2203361 00:11:01.918 Initializing NVMe Controllers 00:11:01.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:01.918 Controller IO queue size 128, less than required. 00:11:01.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:01.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:01.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:01.918 Initialization complete. Launching workers. 00:11:01.918 ======================================================== 00:11:01.918 Latency(us) 00:11:01.918 Device Information : IOPS MiB/s Average min max 00:11:01.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15854.10 61.93 8075.17 1542.89 40619.26 00:11:01.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17212.70 67.24 7436.30 791.27 56886.14 00:11:01.918 ======================================================== 00:11:01.918 Total : 33066.80 129.17 7742.61 791.27 56886.14 00:11:01.918 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60c936c6-1b71-4900-b43a-c63a31d4bd4d 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f493ccee-48e4-4a37-8e0c-a94193e0b91c 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.918 rmmod nvme_tcp 00:11:01.918 rmmod nvme_fabrics 00:11:01.918 rmmod nvme_keyring 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2202611 ']' 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2202611 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2202611 ']' 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2202611 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2202611 00:11:01.918 07:06:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2202611' 00:11:01.918 killing process with pid 2202611 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2202611 00:11:01.918 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2202611 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.918 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:03.356 00:11:03.356 real 0m24.026s 00:11:03.356 user 1m5.078s 00:11:03.356 sys 0m8.688s 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:03.356 ************************************ 00:11:03.356 END TEST nvmf_lvol 00:11:03.356 ************************************ 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.356 ************************************ 00:11:03.356 START TEST nvmf_lvs_grow 00:11:03.356 ************************************ 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:03.356 * Looking for test storage... 
00:11:03.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.356 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.357 --rc genhtml_branch_coverage=1 00:11:03.357 --rc genhtml_function_coverage=1 00:11:03.357 --rc genhtml_legend=1 00:11:03.357 --rc geninfo_all_blocks=1 00:11:03.357 --rc geninfo_unexecuted_blocks=1 00:11:03.357 00:11:03.357 ' 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.357 --rc genhtml_branch_coverage=1 00:11:03.357 --rc genhtml_function_coverage=1 00:11:03.357 --rc genhtml_legend=1 00:11:03.357 --rc geninfo_all_blocks=1 00:11:03.357 --rc geninfo_unexecuted_blocks=1 00:11:03.357 00:11:03.357 ' 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.357 --rc genhtml_branch_coverage=1 00:11:03.357 --rc genhtml_function_coverage=1 00:11:03.357 --rc genhtml_legend=1 00:11:03.357 --rc geninfo_all_blocks=1 00:11:03.357 --rc geninfo_unexecuted_blocks=1 00:11:03.357 00:11:03.357 ' 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:03.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.357 --rc genhtml_branch_coverage=1 00:11:03.357 --rc genhtml_function_coverage=1 00:11:03.357 --rc genhtml_legend=1 00:11:03.357 --rc geninfo_all_blocks=1 00:11:03.357 --rc geninfo_unexecuted_blocks=1 00:11:03.357 00:11:03.357 ' 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:03.357 07:06:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.357 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.358 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:11.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:11.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.495 07:06:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:11.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:11.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.495 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:11.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:11:11.496 00:11:11.496 --- 10.0.0.2 ping statistics --- 00:11:11.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.496 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:11:11.496 00:11:11.496 --- 10.0.0.1 ping statistics --- 00:11:11.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.496 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2210257 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2210257 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2210257 ']' 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.496 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.496 [2024-11-27 07:06:22.037275] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:11:11.496 [2024-11-27 07:06:22.037350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.496 [2024-11-27 07:06:22.139443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.496 [2024-11-27 07:06:22.190501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.496 [2024-11-27 07:06:22.190556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.496 [2024-11-27 07:06:22.190565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.496 [2024-11-27 07:06:22.190572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.496 [2024-11-27 07:06:22.190578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.496 [2024-11-27 07:06:22.191353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.757 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.757 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:11:11.757 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.757 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.757 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.758 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.758 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:12.018 [2024-11-27 07:06:23.067386] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:12.018 ************************************ 00:11:12.018 START TEST lvs_grow_clean 00:11:12.018 ************************************ 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:12.018 07:06:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:12.018 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:12.278 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:12.278 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:12.538 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=66ad6246-d80d-453f-8d70-91141c3c9698 00:11:12.538 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698 00:11:12.538 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:12.538 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:12.538 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:12.538 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 66ad6246-d80d-453f-8d70-91141c3c9698 lvol 150 00:11:12.799 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ec907349-c444-4761-a3f5-a8dccee827e4 00:11:12.799 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:12.799 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:13.058 [2024-11-27 07:06:24.036491] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:13.058 [2024-11-27 07:06:24.036565] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:13.058 true 00:11:13.058 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
66ad6246-d80d-453f-8d70-91141c3c9698 00:11:13.059 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:13.059 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:13.059 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:13.319 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ec907349-c444-4761-a3f5-a8dccee827e4 00:11:13.579 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:13.579 [2024-11-27 07:06:24.730718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.579 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2210961 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2210961 /var/tmp/bdevperf.sock 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2210961 ']' 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.839 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.840 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:13.840 [2024-11-27 07:06:24.978057] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:11:13.840 [2024-11-27 07:06:24.978133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210961 ]
00:11:14.100 [2024-11-27 07:06:25.070542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:14.101 [2024-11-27 07:06:25.122860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:14.672 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:14.672 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:11:14.672 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:11:14.932 Nvme0n1
00:11:14.932 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:11:15.193 [
00:11:15.193 {
00:11:15.193 "name": "Nvme0n1",
00:11:15.193 "aliases": [
00:11:15.193 "ec907349-c444-4761-a3f5-a8dccee827e4"
00:11:15.193 ],
00:11:15.193 "product_name": "NVMe disk",
00:11:15.193 "block_size": 4096,
00:11:15.193 "num_blocks": 38912,
00:11:15.193 "uuid": "ec907349-c444-4761-a3f5-a8dccee827e4",
00:11:15.193 "numa_id": 0,
00:11:15.193 "assigned_rate_limits": {
00:11:15.193 "rw_ios_per_sec": 0,
00:11:15.193 "rw_mbytes_per_sec": 0,
00:11:15.193 "r_mbytes_per_sec": 0,
00:11:15.193 "w_mbytes_per_sec": 0
00:11:15.193 },
00:11:15.193 "claimed": false,
00:11:15.193 "zoned": false,
00:11:15.193 "supported_io_types": {
00:11:15.193 "read": true,
00:11:15.193 "write": true,
00:11:15.193 "unmap": true,
00:11:15.193 "flush": true,
00:11:15.193 "reset": true,
00:11:15.193 "nvme_admin": true,
00:11:15.193 "nvme_io": true,
00:11:15.193 "nvme_io_md": false,
00:11:15.193 "write_zeroes": true,
00:11:15.193 "zcopy": false,
00:11:15.193 "get_zone_info": false,
00:11:15.193 "zone_management": false,
00:11:15.193 "zone_append": false,
00:11:15.193 "compare": true,
00:11:15.193 "compare_and_write": true,
00:11:15.193 "abort": true,
00:11:15.193 "seek_hole": false,
00:11:15.193 "seek_data": false,
00:11:15.193 "copy": true,
00:11:15.193 "nvme_iov_md": false
00:11:15.193 },
00:11:15.193 "memory_domains": [
00:11:15.193 {
00:11:15.193 "dma_device_id": "system",
00:11:15.193 "dma_device_type": 1
00:11:15.193 }
00:11:15.193 ],
00:11:15.193 "driver_specific": {
00:11:15.193 "nvme": [
00:11:15.193 {
00:11:15.193 "trid": {
00:11:15.193 "trtype": "TCP",
00:11:15.193 "adrfam": "IPv4",
00:11:15.193 "traddr": "10.0.0.2",
00:11:15.193 "trsvcid": "4420",
00:11:15.193 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:11:15.193 },
00:11:15.193 "ctrlr_data": {
00:11:15.193 "cntlid": 1,
00:11:15.193 "vendor_id": "0x8086",
00:11:15.193 "model_number": "SPDK bdev Controller",
00:11:15.193 "serial_number": "SPDK0",
00:11:15.193 "firmware_revision": "25.01",
00:11:15.193 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:15.193 "oacs": {
00:11:15.193 "security": 0,
00:11:15.193 "format": 0,
00:11:15.193 "firmware": 0,
00:11:15.193 "ns_manage": 0
00:11:15.193 },
00:11:15.193 "multi_ctrlr": true,
00:11:15.193 "ana_reporting": false
00:11:15.193 },
00:11:15.193 "vs": {
00:11:15.194 "nvme_version": "1.3"
00:11:15.194 },
00:11:15.194 "ns_data": {
00:11:15.194 "id": 1,
00:11:15.194 "can_share": true
00:11:15.194 }
00:11:15.194 }
00:11:15.194 ],
00:11:15.194 "mp_policy": "active_passive"
00:11:15.194 }
00:11:15.194 }
00:11:15.194 ]
00:11:15.194 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2211159
00:11:15.194 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:11:15.194 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:11:15.454 Running I/O for 10 seconds...
00:11:16.403 Latency(us)
[2024-11-27T06:06:27.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:16.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:16.403 Nvme0n1 : 1.00 24159.00 94.37 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:27.608Z] ===================================================================================================================
[2024-11-27T06:06:27.608Z] Total : 24159.00 94.37 0.00 0.00 0.00 0.00 0.00
00:11:16.403
00:11:17.349 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:17.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:17.349 Nvme0n1 : 2.00 24818.00 96.95 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:28.554Z] ===================================================================================================================
[2024-11-27T06:06:28.554Z] Total : 24818.00 96.95 0.00 0.00 0.00 0.00 0.00
00:11:17.349
00:11:17.349 true
00:11:17.349 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:17.349 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:11:17.609 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:11:17.609 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:11:17.609 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2211159
00:11:18.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:18.561 Nvme0n1 : 3.00 25073.00 97.94 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:29.767Z] ===================================================================================================================
[2024-11-27T06:06:29.767Z] Total : 25073.00 97.94 0.00 0.00 0.00 0.00 0.00
00:11:18.562
00:11:19.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:19.503 Nvme0n1 : 4.00 25201.00 98.44 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:30.708Z] ===================================================================================================================
[2024-11-27T06:06:30.708Z] Total : 25201.00 98.44 0.00 0.00 0.00 0.00 0.00
00:11:19.503
00:11:20.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:20.442 Nvme0n1 : 5.00 25296.40 98.81 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:31.647Z] ===================================================================================================================
[2024-11-27T06:06:31.647Z] Total : 25296.40 98.81 0.00 0.00 0.00 0.00 0.00
00:11:20.442
00:11:21.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:21.381 Nvme0n1 : 6.00 25357.17 99.05 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:32.587Z] ===================================================================================================================
00:11:21.382 [2024-11-27T06:06:32.587Z] Total : 25357.17 99.05 0.00 0.00 0.00 0.00 0.00
00:11:21.382
00:11:22.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:22.323 Nvme0n1 : 7.00 25403.29 99.23 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:33.528Z] ===================================================================================================================
[2024-11-27T06:06:33.529Z] Total : 25403.29 99.23 0.00 0.00 0.00 0.00 0.00
00:11:22.324
00:11:23.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:23.261 Nvme0n1 : 8.00 25441.75 99.38 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:34.466Z] ===================================================================================================================
[2024-11-27T06:06:34.466Z] Total : 25441.75 99.38 0.00 0.00 0.00 0.00 0.00
00:11:23.261
00:11:24.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:24.638 Nvme0n1 : 9.00 25466.33 99.48 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:35.843Z] ===================================================================================================================
[2024-11-27T06:06:35.843Z] Total : 25466.33 99.48 0.00 0.00 0.00 0.00 0.00
00:11:24.638
00:11:25.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:25.577 Nvme0n1 : 10.00 25485.70 99.55 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:36.782Z] ===================================================================================================================
[2024-11-27T06:06:36.782Z] Total : 25485.70 99.55 0.00 0.00 0.00 0.00 0.00
00:11:25.577
00:11:25.577
00:11:25.577 Latency(us)
[2024-11-27T06:06:36.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:25.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:25.577 Nvme0n1 : 10.00 25489.60 99.57 0.00 0.00 5018.04 2498.56 18022.40
[2024-11-27T06:06:36.782Z] ===================================================================================================================
[2024-11-27T06:06:36.782Z] Total : 25489.60 99.57 0.00 0.00 5018.04 2498.56 18022.40
00:11:25.577 {
00:11:25.577 "results": [
00:11:25.577 {
00:11:25.577 "job": "Nvme0n1",
00:11:25.577 "core_mask": "0x2",
00:11:25.577 "workload": "randwrite",
00:11:25.577 "status": "finished",
00:11:25.577 "queue_depth": 128,
00:11:25.577 "io_size": 4096,
00:11:25.577 "runtime": 10.00349,
00:11:25.577 "iops": 25489.604128159273,
00:11:25.577 "mibps": 99.56876612562216,
00:11:25.577 "io_failed": 0,
00:11:25.577 "io_timeout": 0,
00:11:25.577 "avg_latency_us": 5018.035111817036,
00:11:25.577 "min_latency_us": 2498.56,
00:11:25.577 "max_latency_us": 18022.4
00:11:25.577 }
00:11:25.577 ],
00:11:25.577 "core_count": 1
00:11:25.577 }
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2210961
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2210961 ']'
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2210961
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2210961
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2210961'
killing process with pid 2210961
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2210961
00:11:25.577 Received shutdown signal, test time was about 10.000000 seconds
00:11:25.577
00:11:25.577 Latency(us)
[2024-11-27T06:06:36.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-27T06:06:36.782Z] ===================================================================================================================
[2024-11-27T06:06:36.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2210961
00:11:25.577 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:25.838 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:25.838 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:25.838 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:11:26.098 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:11:26.098 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:11:26.098 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:11:26.358 [2024-11-27 07:06:37.355753] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:11:26.358 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:26.358 request:
00:11:26.358 {
00:11:26.358 "uuid": "66ad6246-d80d-453f-8d70-91141c3c9698",
00:11:26.358 "method": "bdev_lvol_get_lvstores",
00:11:26.358 "req_id": 1
00:11:26.358 }
00:11:26.358 Got JSON-RPC error response
00:11:26.358 response:
00:11:26.358 {
00:11:26.358 "code": -19,
00:11:26.358 "message": "No such device"
00:11:26.358 }
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:26.618 aio_bdev
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ec907349-c444-4761-a3f5-a8dccee827e4
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ec907349-c444-4761-a3f5-a8dccee827e4
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:26.618 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:11:26.880 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ec907349-c444-4761-a3f5-a8dccee827e4 -t 2000
00:11:26.880 [
00:11:26.880 {
00:11:26.880 "name": "ec907349-c444-4761-a3f5-a8dccee827e4",
00:11:26.880 "aliases": [
00:11:26.880 "lvs/lvol"
00:11:26.880 ],
00:11:26.880 "product_name": "Logical Volume",
00:11:26.880 "block_size": 4096,
00:11:26.880 "num_blocks": 38912,
00:11:26.880 "uuid": "ec907349-c444-4761-a3f5-a8dccee827e4",
00:11:26.880 "assigned_rate_limits": {
00:11:26.880 "rw_ios_per_sec": 0,
00:11:26.880 "rw_mbytes_per_sec": 0,
00:11:26.880 "r_mbytes_per_sec": 0,
00:11:26.880 "w_mbytes_per_sec": 0
00:11:26.880 },
00:11:26.880 "claimed": false,
00:11:26.880 "zoned": false,
00:11:26.880 "supported_io_types": {
00:11:26.880 "read": true,
00:11:26.880 "write": true,
00:11:26.880 "unmap": true,
00:11:26.880 "flush": false,
00:11:26.880 "reset": true,
00:11:26.880 "nvme_admin": false,
00:11:26.880 "nvme_io": false,
00:11:26.880 "nvme_io_md": false,
00:11:26.880 "write_zeroes": true,
00:11:26.880 "zcopy": false,
00:11:26.880 "get_zone_info": false,
00:11:26.880 "zone_management": false,
00:11:26.880 "zone_append": false,
00:11:26.880 "compare": false,
00:11:26.880 "compare_and_write": false,
00:11:26.880 "abort": false,
00:11:26.880 "seek_hole": true,
00:11:26.880 "seek_data": true,
00:11:26.880 "copy": false,
00:11:26.880 "nvme_iov_md": false
00:11:26.880 },
00:11:26.880 "driver_specific": {
00:11:26.880 "lvol": {
00:11:26.880 "lvol_store_uuid": "66ad6246-d80d-453f-8d70-91141c3c9698",
00:11:26.880 "base_bdev": "aio_bdev",
00:11:26.880 "thin_provision": false,
00:11:26.880 "num_allocated_clusters": 38,
00:11:26.880 "snapshot": false,
00:11:26.880 "clone": false,
00:11:26.880 "esnap_clone": false
00:11:26.880 }
00:11:26.880 }
00:11:26.880 }
00:11:26.880 ]
00:11:27.140 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:11:27.140 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:27.140 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:11:27.140 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:11:27.140 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:27.140 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:11:27.401 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:11:27.401 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ec907349-c444-4761-a3f5-a8dccee827e4
00:11:27.661 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66ad6246-d80d-453f-8d70-91141c3c9698
00:11:27.661 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:11:27.922 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:27.922
00:11:27.922 real 0m15.862s
00:11:27.922 user 0m15.687s
00:11:27.922 sys 0m1.379s
00:11:27.922 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.922 07:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:11:27.922 ************************************
00:11:27.922 END TEST lvs_grow_clean
00:11:27.922 ************************************
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:27.922 ************************************
00:11:27.922 START TEST lvs_grow_dirty
00:11:27.922 ************************************
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:27.922 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:28.191 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:11:28.191 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:11:28.451 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:28.451 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:28.451 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:11:28.451 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:11:28.451 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:11:28.451 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94560649-3ee8-4aa6-b863-0fb6659dfffb lvol 150
00:11:28.712 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a1b31aef-79ab-4cc9-9336-594a3820441a
00:11:28.712 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:28.712 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:11:28.973 [2024-11-27 07:06:39.953303] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:11:28.973 [2024-11-27 07:06:39.953348] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:11:28.973 true
00:11:28.973 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:28.973 07:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:11:28.973 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:11:28.973 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:29.234 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a1b31aef-79ab-4cc9-9336-594a3820441a
00:11:29.495 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:11:29.496 [2024-11-27 07:06:40.615213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:29.496 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2214068
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2214068 /var/tmp/bdevperf.sock
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2214068 ']'
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:29.757 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:11:29.757 [2024-11-27 07:06:40.856698] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
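The client side mirrors the clean pass: wait for bdevperf's RPC socket, attach the exported namespace as an NVMe bdev, then drive the timed run from a second process. A sketch with the paths as logged (the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation):

  sock=/var/tmp/bdevperf.sock
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until [ -S "$sock" ]; do sleep 0.1; done      # crude stand-in for waitforlisten
  $rpc -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0     # surfaces as bdev Nvme0n1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$sock" perform_tests                  # starts the 10 s randwrite run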
00:11:29.757 [2024-11-27 07:06:40.856752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214068 ]
00:11:30.017 [2024-11-27 07:06:40.942410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:30.017 [2024-11-27 07:06:40.972336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:30.589 07:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:30.589 07:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:11:30.589 07:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:11:30.850 Nvme0n1
00:11:30.850 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:11:31.110 [
00:11:31.110 {
00:11:31.110 "name": "Nvme0n1",
00:11:31.110 "aliases": [
00:11:31.110 "a1b31aef-79ab-4cc9-9336-594a3820441a"
00:11:31.110 ],
00:11:31.110 "product_name": "NVMe disk",
00:11:31.110 "block_size": 4096,
00:11:31.110 "num_blocks": 38912,
00:11:31.110 "uuid": "a1b31aef-79ab-4cc9-9336-594a3820441a",
00:11:31.110 "numa_id": 0,
00:11:31.110 "assigned_rate_limits": {
00:11:31.110 "rw_ios_per_sec": 0,
00:11:31.110 "rw_mbytes_per_sec": 0,
00:11:31.110 "r_mbytes_per_sec": 0,
00:11:31.110 "w_mbytes_per_sec": 0
00:11:31.110 },
00:11:31.110 "claimed": false,
00:11:31.110 "zoned": false,
00:11:31.110 "supported_io_types": {
00:11:31.110 "read": true,
00:11:31.110 "write": true,
00:11:31.110 "unmap": true,
00:11:31.110 "flush": true,
00:11:31.110 "reset": true,
00:11:31.110 "nvme_admin": true,
00:11:31.110 "nvme_io": true,
00:11:31.110 "nvme_io_md": false,
00:11:31.110 "write_zeroes": true,
00:11:31.110 "zcopy": false,
00:11:31.110 "get_zone_info": false,
00:11:31.110 "zone_management": false,
00:11:31.110 "zone_append": false,
00:11:31.110 "compare": true,
00:11:31.110 "compare_and_write": true,
00:11:31.110 "abort": true,
00:11:31.110 "seek_hole": false,
00:11:31.110 "seek_data": false,
00:11:31.110 "copy": true,
00:11:31.110 "nvme_iov_md": false
00:11:31.110 },
00:11:31.110 "memory_domains": [
00:11:31.110 {
00:11:31.110 "dma_device_id": "system",
00:11:31.110 "dma_device_type": 1
00:11:31.110 }
00:11:31.110 ],
00:11:31.110 "driver_specific": {
00:11:31.110 "nvme": [
00:11:31.110 {
00:11:31.110 "trid": {
00:11:31.110 "trtype": "TCP",
00:11:31.110 "adrfam": "IPv4",
00:11:31.110 "traddr": "10.0.0.2",
00:11:31.110 "trsvcid": "4420",
00:11:31.110 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:11:31.110 },
00:11:31.110 "ctrlr_data": {
00:11:31.110 "cntlid": 1,
00:11:31.110 "vendor_id": "0x8086",
00:11:31.110 "model_number": "SPDK bdev Controller",
00:11:31.110 "serial_number": "SPDK0",
00:11:31.110 "firmware_revision": "25.01",
00:11:31.110 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:31.110 "oacs": {
00:11:31.110 "security": 0,
00:11:31.110 "format": 0,
00:11:31.110 "firmware": 0,
00:11:31.110 "ns_manage": 0
00:11:31.110 },
00:11:31.110 "multi_ctrlr": true,
00:11:31.110 "ana_reporting": false
00:11:31.110 },
00:11:31.110 "vs": {
00:11:31.110 "nvme_version": "1.3"
00:11:31.110 },
00:11:31.110 "ns_data": {
00:11:31.110 "id": 1,
00:11:31.110 "can_share": true
00:11:31.110 }
00:11:31.110 }
00:11:31.110 ],
00:11:31.110 "mp_policy": "active_passive"
00:11:31.110 }
00:11:31.110 }
00:11:31.110 ]
00:11:31.110 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2214402
00:11:31.110 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:11:31.110 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:11:31.110 Running I/O for 10 seconds...
00:11:32.494 Latency(us)
[2024-11-27T06:06:43.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:32.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:32.495 Nvme0n1 : 1.00 24467.00 95.57 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:43.700Z] ===================================================================================================================
[2024-11-27T06:06:43.700Z] Total : 24467.00 95.57 0.00 0.00 0.00 0.00 0.00
00:11:33.066 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:33.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:33.327 Nvme0n1 : 2.00 24589.50 96.05 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:44.532Z] ===================================================================================================================
[2024-11-27T06:06:44.532Z] Total : 24589.50 96.05 0.00 0.00 0.00 0.00 0.00
00:11:33.327
00:11:33.327 true
00:11:33.327 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:33.327 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:11:33.588 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:11:33.588 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:11:33.588 07:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2214402
00:11:34.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:34.159 Nvme0n1 : 3.00 24651.67 96.30 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:45.364Z] ===================================================================================================================
[2024-11-27T06:06:45.364Z] Total : 24651.67 96.30 0.00 0.00 0.00 0.00 0.00
00:11:34.159
00:11:35.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:35.544 Nvme0n1 : 4.00 24702.75 96.50 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:46.749Z] ===================================================================================================================
[2024-11-27T06:06:46.749Z] Total : 24702.75 96.50 0.00 0.00 0.00 0.00 0.00
00:11:35.544
00:11:36.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:36.116 Nvme0n1 : 5.00 24736.60 96.63 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:47.321Z] ===================================================================================================================
[2024-11-27T06:06:47.321Z] Total : 24736.60 96.63 0.00 0.00 0.00 0.00 0.00
00:11:36.116
00:11:37.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:37.509 Nvme0n1 : 6.00 24767.17 96.75 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:48.714Z] ===================================================================================================================
[2024-11-27T06:06:48.714Z] Total : 24767.17 96.75 0.00 0.00 0.00 0.00 0.00
00:11:37.509
00:11:38.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:38.451 Nvme0n1 : 7.00 24792.43 96.85 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:49.656Z] ===================================================================================================================
[2024-11-27T06:06:49.656Z] Total : 24792.43 96.85 0.00 0.00 0.00 0.00 0.00
00:11:38.451
00:11:39.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:39.394 Nvme0n1 : 8.00 24805.38 96.90 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:50.599Z] ===================================================================================================================
[2024-11-27T06:06:50.599Z] Total : 24805.38 96.90 0.00 0.00 0.00 0.00 0.00
00:11:39.394
00:11:40.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:40.335 Nvme0n1 : 9.00 24819.89 96.95 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:51.540Z] ===================================================================================================================
[2024-11-27T06:06:51.540Z] Total : 24819.89 96.95 0.00 0.00 0.00 0.00 0.00
00:11:40.335
00:11:41.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:41.277 Nvme0n1 : 10.00 24832.30 97.00 0.00 0.00 0.00 0.00 0.00
[2024-11-27T06:06:52.482Z] ===================================================================================================================
[2024-11-27T06:06:52.482Z] Total : 24832.30 97.00 0.00 0.00 0.00 0.00 0.00
00:11:41.277
00:11:41.277
00:11:41.277 Latency(us)
[2024-11-27T06:06:52.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:41.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:41.277 Nvme0n1 : 10.00 24832.00 97.00 0.00 0.00 5150.95 3959.47 10594.99
[2024-11-27T06:06:52.482Z] ===================================================================================================================
[2024-11-27T06:06:52.482Z] Total : 24832.00 97.00 0.00 0.00 5150.95 3959.47 10594.99
00:11:41.277 {
00:11:41.277 "results": [
00:11:41.277 {
00:11:41.277 "job": "Nvme0n1",
00:11:41.277 "core_mask": "0x2",
00:11:41.277 "workload": "randwrite",
00:11:41.277 "status": "finished",
00:11:41.277 "queue_depth": 128,
00:11:41.277 "io_size": 4096,
00:11:41.277 "runtime": 10.004955,
00:11:41.277 "iops": 24831.995746107805,
00:11:41.277 "mibps": 96.99998338323361,
00:11:41.277 "io_failed": 0,
00:11:41.277 "io_timeout": 0,
00:11:41.277 "avg_latency_us": 5150.954124581225,
00:11:41.277 "min_latency_us": 3959.4666666666667,
00:11:41.277 "max_latency_us": 10594.986666666666
00:11:41.277 }
00:11:41.277 ],
00:11:41.277 "core_count": 1
00:11:41.277 }
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2214068
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2214068 ']'
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2214068
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2214068
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2214068'
killing process with pid 2214068
00:11:41.277 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2214068
00:11:41.277 Received shutdown signal, test time was about 10.000000 seconds
00:11:41.277
00:11:41.277 Latency(us)
[2024-11-27T06:06:52.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-27T06:06:52.482Z] ===================================================================================================================
[2024-11-27T06:06:52.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:11:41.278 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2214068
00:11:41.538 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:41.538 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:41.799 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:41.799 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2210257
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2210257
00:11:42.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2210257 Killed "${NVMF_APP[@]}" "$@"
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2216437
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2216437
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2216437 ']'
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:42.060 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:11:42.060 [2024-11-27 07:06:53.111186] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:11:42.060 [2024-11-27 07:06:53.111243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:42.060 [2024-11-27 07:06:53.202219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:42.060 [2024-11-27 07:06:53.231783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:42.060 [2024-11-27 07:06:53.231809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:42.060 [2024-11-27 07:06:53.231815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:42.060 [2024-11-27 07:06:53.231820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:42.060 [2024-11-27 07:06:53.231824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:42.060 [2024-11-27 07:06:53.232261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:43.002 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:43.002 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:11:43.002 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:43.002 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:43.002 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:11:43.003 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:43.003 07:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:43.003 [2024-11-27 07:06:54.111093] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore
00:11:43.003 [2024-11-27 07:06:54.111174] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:11:43.003 [2024-11-27 07:06:54.111197] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a1b31aef-79ab-4cc9-9336-594a3820441a
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a1b31aef-79ab-4cc9-9336-594a3820441a
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:43.003 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:11:43.263 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a1b31aef-79ab-4cc9-9336-594a3820441a -t 2000
00:11:43.263 [
00:11:43.263 {
00:11:43.263 "name": "a1b31aef-79ab-4cc9-9336-594a3820441a",
00:11:43.263 "aliases": [
00:11:43.263 "lvs/lvol"
00:11:43.263 ],
00:11:43.263 "product_name": "Logical Volume",
00:11:43.263 "block_size": 4096,
00:11:43.263 "num_blocks": 38912,
00:11:43.263 "uuid": "a1b31aef-79ab-4cc9-9336-594a3820441a",
00:11:43.263 "assigned_rate_limits": {
00:11:43.263 "rw_ios_per_sec": 0,
00:11:43.263 "rw_mbytes_per_sec": 0,
00:11:43.263 "r_mbytes_per_sec": 0,
00:11:43.263 "w_mbytes_per_sec": 0
00:11:43.263 },
00:11:43.263 "claimed": false,
00:11:43.263 "zoned": false,
00:11:43.263 "supported_io_types": {
00:11:43.263 "read": true,
00:11:43.263 "write": true,
00:11:43.263 "unmap": true,
00:11:43.263 "flush": false,
00:11:43.263 "reset": true,
00:11:43.263 "nvme_admin": false,
00:11:43.263 "nvme_io": false,
00:11:43.263 "nvme_io_md": false,
00:11:43.263 "write_zeroes": true,
00:11:43.263 "zcopy": false,
00:11:43.263 "get_zone_info": false,
00:11:43.263 "zone_management": false,
00:11:43.263 "zone_append": false,
00:11:43.263 "compare": false,
00:11:43.263 "compare_and_write": false,
00:11:43.263 "abort": false,
00:11:43.263 "seek_hole": true,
00:11:43.263 "seek_data": true,
00:11:43.263 "copy": false,
00:11:43.263 "nvme_iov_md": false
00:11:43.263 },
00:11:43.263 "driver_specific": {
00:11:43.263 "lvol": {
00:11:43.263 "lvol_store_uuid": "94560649-3ee8-4aa6-b863-0fb6659dfffb",
00:11:43.263 "base_bdev": "aio_bdev",
00:11:43.263 "thin_provision": false,
00:11:43.263 "num_allocated_clusters": 38,
00:11:43.263 "snapshot": false,
00:11:43.263 "clone": false,
00:11:43.263 "esnap_clone": false
00:11:43.263 }
00:11:43.263 }
00:11:43.263 }
00:11:43.263 ]
00:11:43.523 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:11:43.523 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:43.523 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:11:43.783 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:11:43.783 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:43.783 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:11:43.783 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:11:43.783 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:11:43.783 [2024-11-27 07:06:54.939641] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:11:44.044 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb
00:11:44.044 request:
00:11:44.044 {
00:11:44.044 "uuid": "94560649-3ee8-4aa6-b863-0fb6659dfffb",
00:11:44.044 "method": "bdev_lvol_get_lvstores",
00:11:44.044 "req_id": 1
00:11:44.044 }
00:11:44.044 Got JSON-RPC error response
00:11:44.044 response:
00:11:44.044 {
00:11:44.044 "code": -19,
00:11:44.044 "message": "No such device"
00:11:44.044 }
00:11:44.044 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:11:44.044 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:44.044 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:44.044 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:44.044 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:44.306 aio_bdev
00:11:44.306 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a1b31aef-79ab-4cc9-9336-594a3820441a
00:11:44.306 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a1b31aef-79ab-4cc9-9336-594a3820441a
00:11:44.306 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:44.306 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:11:44.306 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:44.306 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:44.306 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a1b31aef-79ab-4cc9-9336-594a3820441a -t 2000 00:11:44.567 [ 00:11:44.567 { 00:11:44.567 "name": "a1b31aef-79ab-4cc9-9336-594a3820441a", 00:11:44.567 "aliases": [ 00:11:44.567 "lvs/lvol" 00:11:44.567 ], 00:11:44.567 "product_name": "Logical Volume", 00:11:44.567 "block_size": 4096, 00:11:44.567 "num_blocks": 38912, 00:11:44.567 "uuid": "a1b31aef-79ab-4cc9-9336-594a3820441a", 00:11:44.567 "assigned_rate_limits": { 00:11:44.567 "rw_ios_per_sec": 0, 00:11:44.567 "rw_mbytes_per_sec": 0, 00:11:44.567 "r_mbytes_per_sec": 0, 00:11:44.567 "w_mbytes_per_sec": 0 00:11:44.567 }, 00:11:44.567 "claimed": false, 00:11:44.567 "zoned": false, 00:11:44.567 "supported_io_types": { 00:11:44.567 "read": true, 00:11:44.567 "write": true, 00:11:44.567 "unmap": true, 00:11:44.567 "flush": false, 00:11:44.567 "reset": true, 00:11:44.567 "nvme_admin": false, 00:11:44.567 "nvme_io": false, 00:11:44.567 "nvme_io_md": false, 00:11:44.567 "write_zeroes": true, 00:11:44.567 "zcopy": false, 00:11:44.567 "get_zone_info": false, 00:11:44.567 "zone_management": false, 00:11:44.567 "zone_append": false, 00:11:44.567 "compare": false, 00:11:44.567 "compare_and_write": false, 00:11:44.567 "abort": false, 00:11:44.567 "seek_hole": true, 00:11:44.567 "seek_data": true, 00:11:44.567 "copy": false, 00:11:44.567 "nvme_iov_md": false 00:11:44.567 }, 00:11:44.567 "driver_specific": { 00:11:44.567 "lvol": { 00:11:44.567 "lvol_store_uuid": "94560649-3ee8-4aa6-b863-0fb6659dfffb", 00:11:44.567 "base_bdev": "aio_bdev", 00:11:44.567 "thin_provision": false, 00:11:44.567 "num_allocated_clusters": 38, 00:11:44.567 "snapshot": false, 00:11:44.567 "clone": false, 00:11:44.567 "esnap_clone": false 00:11:44.567 } 00:11:44.567 } 00:11:44.567 } 00:11:44.567 ] 00:11:44.567 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:44.567 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb 00:11:44.567 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:44.828 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:44.828 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 94560649-3ee8-4aa6-b863-0fb6659dfffb 00:11:44.828 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:44.828 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:44.828 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a1b31aef-79ab-4cc9-9336-594a3820441a 00:11:45.089 07:06:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94560649-3ee8-4aa6-b863-0fb6659dfffb 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:45.350 00:11:45.350 real 0m17.420s 00:11:45.350 user 0m45.484s 00:11:45.350 sys 0m3.368s 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:45.350 ************************************ 00:11:45.350 END TEST lvs_grow_dirty 00:11:45.350 ************************************ 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:45.350 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:45.350 nvmf_trace.0 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.611 rmmod nvme_tcp 00:11:45.611 rmmod nvme_fabrics 00:11:45.611 rmmod nvme_keyring 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.611 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:45.612 
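[Annotation] Before tearing the target down, process_shm --id 0 preserves the trace ring buffer the app kept in shared memory; the resulting nvmf_trace.0_shm.tar.gz lands in the job's output directory for offline analysis. Condensed from the trace above, with OUT standing in for the .../spdk/../output directory (the shorthand is mine):

    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')   # -> nvmf_trace.0
    for n in $shm_files; do
        tar -C /dev/shm/ -cvzf "$OUT/${n}_shm.tar.gz" "$n"
    done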
07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2216437 ']' 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2216437 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2216437 ']' 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2216437 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216437 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216437' 00:11:45.612 killing process with pid 2216437 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2216437 00:11:45.612 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2216437 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.873 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.787 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.787 00:11:47.787 real 0m44.650s 00:11:47.787 user 1m7.473s 00:11:47.787 sys 0m10.848s 00:11:47.787 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.787 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:47.787 ************************************ 00:11:47.787 END TEST nvmf_lvs_grow 00:11:47.787 ************************************ 00:11:47.787 07:06:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
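[Annotation] killprocess is deliberately paranoid: it checks the PID is non-empty and alive with kill -0, resolves the process name (reactor_0 here) and refuses to signal anything named sudo, then kills and waits so the exit status gets collected. A condensed sketch of the helper as traced above; details of the real common/autotest_common.sh function may differ:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1   # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }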
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:47.787 07:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.787 07:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.787 07:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.048 ************************************ 00:11:48.048 START TEST nvmf_bdev_io_wait 00:11:48.048 ************************************ 00:11:48.048 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:48.048 * Looking for test storage... 00:11:48.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.048 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.049 --rc genhtml_branch_coverage=1 00:11:48.049 --rc genhtml_function_coverage=1 00:11:48.049 --rc genhtml_legend=1 00:11:48.049 --rc geninfo_all_blocks=1 00:11:48.049 --rc geninfo_unexecuted_blocks=1 00:11:48.049 00:11:48.049 ' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.049 --rc genhtml_branch_coverage=1 00:11:48.049 --rc genhtml_function_coverage=1 00:11:48.049 --rc genhtml_legend=1 00:11:48.049 --rc geninfo_all_blocks=1 00:11:48.049 --rc geninfo_unexecuted_blocks=1 00:11:48.049 00:11:48.049 ' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.049 --rc genhtml_branch_coverage=1 00:11:48.049 --rc genhtml_function_coverage=1 00:11:48.049 --rc genhtml_legend=1 00:11:48.049 --rc geninfo_all_blocks=1 00:11:48.049 --rc geninfo_unexecuted_blocks=1 00:11:48.049 00:11:48.049 ' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.049 --rc genhtml_branch_coverage=1 00:11:48.049 --rc genhtml_function_coverage=1 00:11:48.049 --rc genhtml_legend=1 00:11:48.049 --rc geninfo_all_blocks=1 00:11:48.049 --rc geninfo_unexecuted_blocks=1 00:11:48.049 00:11:48.049 ' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.049 07:06:59 
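[Annotation] The long trace above ("lt 1.15 2") is the harness asking whether the installed lcov predates 2.x before choosing coverage flags: cmp_versions splits both version strings on ".", "-" and ":" and walks the fields numerically. A reconstruction of the less-than path from this trace; the in-tree scripts/common.sh is more general (it also handles ">", "=", and validation via decimal):

    lt() {   # succeeds when version $1 < version $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal
    }
    lt 1.15 2 && echo "lcov is older than 2.x"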
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
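[Annotation] Two values fixed above matter for every later connect: nvme gen-hostnqn mints a UUID-based host NQN, and the UUID tail doubles as the host ID, giving initiator-side commands a stable identity via the NVME_HOST array. Roughly as follows; the exact extraction used by common.sh may differ from the parameter expansion shown here:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare <uuid>
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")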
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
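[Annotation] One genuine defect is captured above: nvmf/common.sh line 33 evaluates [ '' -eq 1 ], and test(1) rejects an empty operand for -eq, so bash prints "integer expression expected" and the branch falls through as false. The usual fix pattern is a default expansion before the numeric test; FLAG below is a placeholder, not the variable actually tested on that line:

    # Fails noisily when FLAG is unset or empty:
    #   [ "$FLAG" -eq 1 ] && ...
    # Quietly treats unset as 0 instead:
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi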
MALLOC_BLOCK_SIZE=512 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.049 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.050 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:56.373 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:56.373 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.373 07:07:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:56.373 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.373 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:56.374 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
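[Annotation] Device discovery above is pure sysfs: for each matched E810 function the harness globs the net/ directory under the PCI address, keeps interfaces whose state is up, and strips the path prefix, which is how the two renamed cvl_0_* ports are found. The same probe by hand, using the commands from the @411/@427/@428 steps:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # basename only -> cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"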
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:11:56.374 00:11:56.374 --- 10.0.0.2 ping statistics --- 00:11:56.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.374 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:11:56.374 00:11:56.374 --- 10.0.0.1 ping statistics --- 00:11:56.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.374 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2221513 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2221513 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2221513 ']' 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.374 07:07:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.374 [2024-11-27 07:07:06.846688] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
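[Annotation] The namespace plumbing just traced is what makes a single-host "phy" run exercise a real TCP path: the target-side port cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule admits port 4420, and both directions are ping-verified (0.612 ms and 0.327 ms above). Condensed from the @271-@291 steps:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator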
00:11:56.374 [2024-11-27 07:07:06.846755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.374 [2024-11-27 07:07:06.945882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.374 [2024-11-27 07:07:07.000390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.374 [2024-11-27 07:07:07.000445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.374 [2024-11-27 07:07:07.000453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.374 [2024-11-27 07:07:07.000460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.374 [2024-11-27 07:07:07.000467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.374 [2024-11-27 07:07:07.002712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.374 [2024-11-27 07:07:07.002872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.374 [2024-11-27 07:07:07.003031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.374 [2024-11-27 07:07:07.003032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
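[Annotation] Because nvmf_tgt was started with --wait-for-rpc, subsystem initialization is deferred: bdev_set_options (-p 5 -c 1) has to land before framework_start_init, and only then can the TCP transport and, in the lines that follow, the Malloc0-backed subsystem be created. The full bring-up order as issued here, again with RPC as shorthand for scripts/rpc.py:

    "$RPC" bdev_set_options -p 5 -c 1        # pre-init bdev options
    "$RPC" framework_start_init              # complete the deferred startup
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420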
# set +x 00:11:56.635 [2024-11-27 07:07:07.806270] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.635 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.897 Malloc0 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:56.897 [2024-11-27 07:07:07.871779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2221866 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2221868 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:56.897 { 00:11:56.897 "params": { 
00:11:56.897 "name": "Nvme$subsystem", 00:11:56.897 "trtype": "$TEST_TRANSPORT", 00:11:56.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "$NVMF_PORT", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.897 "hdgst": ${hdgst:-false}, 00:11:56.897 "ddgst": ${ddgst:-false} 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 } 00:11:56.897 EOF 00:11:56.897 )") 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2221870 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:56.897 { 00:11:56.897 "params": { 00:11:56.897 "name": "Nvme$subsystem", 00:11:56.897 "trtype": "$TEST_TRANSPORT", 00:11:56.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "$NVMF_PORT", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.897 "hdgst": ${hdgst:-false}, 00:11:56.897 "ddgst": ${ddgst:-false} 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 } 00:11:56.897 EOF 00:11:56.897 )") 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2221873 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:56.897 { 00:11:56.897 "params": { 00:11:56.897 "name": "Nvme$subsystem", 00:11:56.897 "trtype": "$TEST_TRANSPORT", 00:11:56.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "$NVMF_PORT", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.897 "hdgst": ${hdgst:-false}, 
00:11:56.897 "ddgst": ${ddgst:-false} 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 } 00:11:56.897 EOF 00:11:56.897 )") 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:56.897 { 00:11:56.897 "params": { 00:11:56.897 "name": "Nvme$subsystem", 00:11:56.897 "trtype": "$TEST_TRANSPORT", 00:11:56.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "$NVMF_PORT", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.897 "hdgst": ${hdgst:-false}, 00:11:56.897 "ddgst": ${ddgst:-false} 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 } 00:11:56.897 EOF 00:11:56.897 )") 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2221866 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:56.897 "params": { 00:11:56.897 "name": "Nvme1", 00:11:56.897 "trtype": "tcp", 00:11:56.897 "traddr": "10.0.0.2", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "4420", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.897 "hdgst": false, 00:11:56.897 "ddgst": false 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 }' 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:56.897 "params": { 00:11:56.897 "name": "Nvme1", 00:11:56.897 "trtype": "tcp", 00:11:56.897 "traddr": "10.0.0.2", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "4420", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.897 "hdgst": false, 00:11:56.897 "ddgst": false 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 }' 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:56.897 "params": { 00:11:56.897 "name": "Nvme1", 00:11:56.897 "trtype": "tcp", 00:11:56.897 "traddr": "10.0.0.2", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "4420", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.897 "hdgst": false, 00:11:56.897 "ddgst": false 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 }' 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:56.897 07:07:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:56.897 "params": { 00:11:56.897 "name": "Nvme1", 00:11:56.897 "trtype": "tcp", 00:11:56.897 "traddr": "10.0.0.2", 00:11:56.897 "adrfam": "ipv4", 00:11:56.897 "trsvcid": "4420", 00:11:56.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.897 "hdgst": false, 00:11:56.897 "ddgst": false 00:11:56.897 }, 00:11:56.897 "method": "bdev_nvme_attach_controller" 00:11:56.897 }' 00:11:56.897 [2024-11-27 07:07:07.930321] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:11:56.897 [2024-11-27 07:07:07.930335] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:11:56.898 [2024-11-27 07:07:07.930395] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:56.898 [2024-11-27 07:07:07.930400] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:56.898 [2024-11-27 07:07:07.933157] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:11:56.898 [2024-11-27 07:07:07.933224] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:56.898 [2024-11-27 07:07:07.937703] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
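[Annotation] Four bdevperf clients are then launched in parallel against the same namespace, one workload each (write/read/flush/unmap), pinned to distinct cores (-m 0x10/0x20/0x40/0x80) with separate shm IDs (-i 1..4). Each receives its attach config over an anonymous pipe, which is what the /dev/fd/63 argument and the printf'd JSON above are. One instance spelled out, with BDEVPERF as shorthand for the build/examples path and gen_nvmf_target_json being the harness helper traced above:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!

The result tables that follow are easy to sanity-check: at 4 KiB I/O, MiB/s is just IOPS x 4096 / 2^20, e.g. 11274.26 x 4096 / 1048576 = 44.04 MiB/s for the unmap job, matching the printed column.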
00:11:56.898 [2024-11-27 07:07:07.937786] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:57.158 [2024-11-27 07:07:08.149625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.158 [2024-11-27 07:07:08.188974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:57.158 [2024-11-27 07:07:08.242204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.158 [2024-11-27 07:07:08.281187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:57.158 [2024-11-27 07:07:08.336084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.419 [2024-11-27 07:07:08.378974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:57.419 [2024-11-27 07:07:08.406899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.419 [2024-11-27 07:07:08.443513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:57.419 Running I/O for 1 seconds... 00:11:57.419 Running I/O for 1 seconds... 00:11:57.419 Running I/O for 1 seconds... 00:11:57.679 Running I/O for 1 seconds... 00:11:58.619 11216.00 IOPS, 43.81 MiB/s 00:11:58.619 Latency(us) 00:11:58.619 [2024-11-27T06:07:09.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.619 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:58.619 Nvme1n1 : 1.01 11274.26 44.04 0.00 0.00 11310.65 5625.17 16056.32 00:11:58.619 [2024-11-27T06:07:09.824Z] =================================================================================================================== 00:11:58.619 [2024-11-27T06:07:09.824Z] Total : 11274.26 44.04 0.00 0.00 11310.65 5625.17 16056.32 00:11:58.619 9778.00 IOPS, 38.20 MiB/s [2024-11-27T06:07:09.824Z] 183472.00 IOPS, 716.69 MiB/s 00:11:58.619 Latency(us) 00:11:58.619 [2024-11-27T06:07:09.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.619 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:58.619 Nvme1n1 : 1.01 9847.79 38.47 0.00 0.00 12948.45 6062.08 21517.65 00:11:58.619 [2024-11-27T06:07:09.824Z] =================================================================================================================== 00:11:58.619 [2024-11-27T06:07:09.824Z] Total : 9847.79 38.47 0.00 0.00 12948.45 6062.08 21517.65 00:11:58.619 00:11:58.619 Latency(us) 00:11:58.619 [2024-11-27T06:07:09.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.619 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:58.619 Nvme1n1 : 1.00 183107.19 715.26 0.00 0.00 695.22 296.96 1966.08 00:11:58.619 [2024-11-27T06:07:09.824Z] =================================================================================================================== 00:11:58.619 [2024-11-27T06:07:09.824Z] Total : 183107.19 715.26 0.00 0.00 695.22 296.96 1966.08 00:11:58.619 9464.00 IOPS, 36.97 MiB/s 00:11:58.619 Latency(us) 00:11:58.619 [2024-11-27T06:07:09.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.619 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:58.619 Nvme1n1 : 1.01 9527.33 37.22 0.00 0.00 13387.45 5652.48 22173.01 00:11:58.619 [2024-11-27T06:07:09.824Z] 
=================================================================================================================== 00:11:58.619 [2024-11-27T06:07:09.824Z] Total : 9527.33 37.22 0.00 0.00 13387.45 5652.48 22173.01 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2221868 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2221870 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2221873 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.619 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.619 rmmod nvme_tcp 00:11:58.880 rmmod nvme_fabrics 00:11:58.880 rmmod nvme_keyring 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2221513 ']' 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2221513 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2221513 ']' 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2221513 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221513 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2221513' 00:11:58.880 killing process with pid 2221513 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2221513 00:11:58.880 07:07:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2221513 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.140 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.051 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.051 00:12:01.051 real 0m13.190s 00:12:01.051 user 0m19.783s 00:12:01.051 sys 0m7.483s 00:12:01.051 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.051 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:01.051 ************************************ 00:12:01.051 END TEST nvmf_bdev_io_wait 00:12:01.051 ************************************ 00:12:01.051 07:07:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:01.051 07:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.051 07:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.051 07:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:01.314 ************************************ 00:12:01.314 START TEST nvmf_queue_depth 00:12:01.314 ************************************ 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:01.314 * Looking for test storage... 
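The shutdown just traced (nvmftestfini) is the standard epilogue for these target tests: unload the host-side NVMe transport modules, kill the nvmf_tgt reactor by pid, strip the SPDK_NVMF-tagged iptables rules, and drop the test namespace before flushing the initiator address. A rough manual equivalent with this rig's names filled in; the ip netns del line is an assumption about what _remove_spdk_ns does, since that helper runs with xtrace disabled:

    kill "$nvmfpid"                       # nvmf_tgt, pid 2221513 in this run
    modprobe -r nvme-tcp nvme-fabrics     # the rmmod lines show nvme_keyring goes too
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # matches the iptr trace
    ip netns del cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1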
00:12:01.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.314 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.315 --rc genhtml_branch_coverage=1 00:12:01.315 --rc genhtml_function_coverage=1 00:12:01.315 --rc genhtml_legend=1 00:12:01.315 --rc geninfo_all_blocks=1 00:12:01.315 --rc geninfo_unexecuted_blocks=1 00:12:01.315 00:12:01.315 ' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.315 --rc genhtml_branch_coverage=1 00:12:01.315 --rc genhtml_function_coverage=1 00:12:01.315 --rc genhtml_legend=1 00:12:01.315 --rc geninfo_all_blocks=1 00:12:01.315 --rc geninfo_unexecuted_blocks=1 00:12:01.315 00:12:01.315 ' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.315 --rc genhtml_branch_coverage=1 00:12:01.315 --rc genhtml_function_coverage=1 00:12:01.315 --rc genhtml_legend=1 00:12:01.315 --rc geninfo_all_blocks=1 00:12:01.315 --rc geninfo_unexecuted_blocks=1 00:12:01.315 00:12:01.315 ' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.315 --rc genhtml_branch_coverage=1 00:12:01.315 --rc genhtml_function_coverage=1 00:12:01.315 --rc genhtml_legend=1 00:12:01.315 --rc geninfo_all_blocks=1 00:12:01.315 --rc geninfo_unexecuted_blocks=1 00:12:01.315 00:12:01.315 ' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.315 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.316 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.316 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.316 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.576 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.576 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.576 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.576 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:09.716 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:09.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:09.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:09.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:09.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.717 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.718 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.718 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.718 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.718 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.718 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.718 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:12:09.718 00:12:09.718 --- 10.0.0.2 ping statistics --- 00:12:09.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.718 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:12:09.718 07:07:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:12:09.718 00:12:09.718 --- 10.0.0.1 ping statistics --- 00:12:09.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.718 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2226571 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2226571 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2226571 ']' 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.718 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.718 [2024-11-27 07:07:20.119676] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
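Everything from nvmf_tcp_init through the two pings above is the usual phy-mode plumbing: the pair of back-to-back E810 ports found earlier become the target and initiator sides, the target port is moved into its own network namespace, reachability is verified in both directions, and only then is nvmf_tgt started inside that namespace (-m 0x2, hence the reactor on core 1). Condensed from the commands in the trace, with this rig's interface names and the iptables comment tag omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &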
00:12:09.718 [2024-11-27 07:07:20.119744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.718 [2024-11-27 07:07:20.223570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.718 [2024-11-27 07:07:20.273694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.718 [2024-11-27 07:07:20.273750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.718 [2024-11-27 07:07:20.273759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.718 [2024-11-27 07:07:20.273767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.718 [2024-11-27 07:07:20.273773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.718 [2024-11-27 07:07:20.274540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.980 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 [2024-11-27 07:07:21.001907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 Malloc0 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.980 07:07:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 [2024-11-27 07:07:21.063050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2226608 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2226608 /var/tmp/bdevperf.sock 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2226608 ']' 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:09.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.980 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 [2024-11-27 07:07:21.122565] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
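With the target provisioned (a tcp transport, a 64 MiB Malloc0 bdev with 512 B blocks, and subsystem cnode1 listening on 10.0.0.2:4420), queue_depth.sh follows the standard bdevperf sidecar pattern: start bdevperf idle (-z) on its own RPC socket, attach the remote controller over that socket, then trigger the preloaded verify workload (queue depth 1024, 4 KiB I/O, 10 s) with bdevperf.py. The whole flow, condensed from the rpc_cmd calls in the trace (paths relative to the spdk checkout):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests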
00:12:09.980 [2024-11-27 07:07:21.122633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226608 ] 00:12:10.243 [2024-11-27 07:07:21.215593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.243 [2024-11-27 07:07:21.269425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.815 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.815 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:10.815 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:10.815 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.815 07:07:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:11.076 NVMe0n1 00:12:11.076 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.076 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:11.076 Running I/O for 10 seconds... 00:12:13.401 8199.00 IOPS, 32.03 MiB/s [2024-11-27T06:07:25.548Z] 9979.50 IOPS, 38.98 MiB/s [2024-11-27T06:07:26.490Z] 10576.67 IOPS, 41.32 MiB/s [2024-11-27T06:07:27.430Z] 11230.25 IOPS, 43.87 MiB/s [2024-11-27T06:07:28.372Z] 11672.80 IOPS, 45.60 MiB/s [2024-11-27T06:07:29.313Z] 11948.00 IOPS, 46.67 MiB/s [2024-11-27T06:07:30.697Z] 12235.00 IOPS, 47.79 MiB/s [2024-11-27T06:07:31.638Z] 12410.00 IOPS, 48.48 MiB/s [2024-11-27T06:07:32.581Z] 12518.00 IOPS, 48.90 MiB/s [2024-11-27T06:07:32.581Z] 12627.50 IOPS, 49.33 MiB/s 00:12:21.376 Latency(us) 00:12:21.376 [2024-11-27T06:07:32.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.376 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:21.376 Verification LBA range: start 0x0 length 0x4000 00:12:21.376 NVMe0n1 : 10.04 12661.79 49.46 0.00 0.00 80577.24 9065.81 74274.13 00:12:21.376 [2024-11-27T06:07:32.581Z] =================================================================================================================== 00:12:21.376 [2024-11-27T06:07:32.581Z] Total : 12661.79 49.46 0.00 0.00 80577.24 9065.81 74274.13 00:12:21.376 { 00:12:21.376 "results": [ 00:12:21.376 { 00:12:21.376 "job": "NVMe0n1", 00:12:21.376 "core_mask": "0x1", 00:12:21.376 "workload": "verify", 00:12:21.376 "status": "finished", 00:12:21.376 "verify_range": { 00:12:21.376 "start": 0, 00:12:21.376 "length": 16384 00:12:21.376 }, 00:12:21.376 "queue_depth": 1024, 00:12:21.376 "io_size": 4096, 00:12:21.376 "runtime": 10.044155, 00:12:21.376 "iops": 12661.791858050778, 00:12:21.376 "mibps": 49.46012444551085, 00:12:21.376 "io_failed": 0, 00:12:21.376 "io_timeout": 0, 00:12:21.376 "avg_latency_us": 80577.23524222148, 00:12:21.376 "min_latency_us": 9065.813333333334, 00:12:21.376 "max_latency_us": 74274.13333333333 00:12:21.376 } 00:12:21.376 ], 00:12:21.376 "core_count": 1 00:12:21.376 } 00:12:21.376 07:07:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2226608 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2226608 ']' 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2226608 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226608 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226608' 00:12:21.376 killing process with pid 2226608 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2226608 00:12:21.376 Received shutdown signal, test time was about 10.000000 seconds 00:12:21.376 00:12:21.376 Latency(us) 00:12:21.376 [2024-11-27T06:07:32.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.376 [2024-11-27T06:07:32.581Z] =================================================================================================================== 00:12:21.376 [2024-11-27T06:07:32.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2226608 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.376 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.376 rmmod nvme_tcp 00:12:21.376 rmmod nvme_fabrics 00:12:21.376 rmmod nvme_keyring 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2226571 ']' 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2226571 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2226571 ']' 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2226571 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226571 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226571' 00:12:21.636 killing process with pid 2226571 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2226571 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2226571 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.636 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.186 00:12:24.186 real 0m22.588s 00:12:24.186 user 0m25.871s 00:12:24.186 sys 0m7.123s 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:24.186 ************************************ 00:12:24.186 END TEST nvmf_queue_depth 00:12:24.186 ************************************ 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.186 ************************************ 00:12:24.186 START TEST nvmf_target_multipath 00:12:24.186 ************************************ 00:12:24.186 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:24.186 * Looking for test storage... 00:12:24.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:24.186 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.187 --rc genhtml_branch_coverage=1 00:12:24.187 --rc genhtml_function_coverage=1 00:12:24.187 --rc genhtml_legend=1 00:12:24.187 --rc geninfo_all_blocks=1 00:12:24.187 --rc geninfo_unexecuted_blocks=1 00:12:24.187 00:12:24.187 ' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.187 --rc genhtml_branch_coverage=1 00:12:24.187 --rc genhtml_function_coverage=1 00:12:24.187 --rc genhtml_legend=1 00:12:24.187 --rc geninfo_all_blocks=1 00:12:24.187 --rc geninfo_unexecuted_blocks=1 00:12:24.187 00:12:24.187 ' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.187 --rc genhtml_branch_coverage=1 00:12:24.187 --rc genhtml_function_coverage=1 00:12:24.187 --rc genhtml_legend=1 00:12:24.187 --rc geninfo_all_blocks=1 00:12:24.187 --rc geninfo_unexecuted_blocks=1 00:12:24.187 00:12:24.187 ' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:24.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.187 --rc genhtml_branch_coverage=1 00:12:24.187 --rc genhtml_function_coverage=1 00:12:24.187 --rc genhtml_legend=1 00:12:24.187 --rc geninfo_all_blocks=1 00:12:24.187 --rc geninfo_unexecuted_blocks=1 00:12:24.187 00:12:24.187 ' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.187 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.327 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:32.328 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:32.328 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:32.328 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.328 07:07:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:32.328 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:12:32.328 00:12:32.328 --- 10.0.0.2 ping statistics --- 00:12:32.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.328 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:12:32.328 00:12:32.328 --- 10.0.0.1 ping statistics --- 00:12:32.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.328 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:32.328 only one NIC for nvmf test 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.328 rmmod nvme_tcp 00:12:32.328 rmmod nvme_fabrics 00:12:32.328 rmmod nvme_keyring 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.328 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.712 00:12:33.712 real 0m9.910s 00:12:33.712 user 0m2.189s 00:12:33.712 sys 0m5.688s 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:33.712 ************************************ 00:12:33.712 END TEST nvmf_target_multipath 00:12:33.712 ************************************ 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.712 07:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:33.974 ************************************ 00:12:33.974 START TEST nvmf_zcopy 00:12:33.974 ************************************ 00:12:33.975 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:33.975 * Looking for test storage... 
00:12:33.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.975 --rc genhtml_branch_coverage=1 00:12:33.975 --rc genhtml_function_coverage=1 00:12:33.975 --rc genhtml_legend=1 00:12:33.975 --rc geninfo_all_blocks=1 00:12:33.975 --rc geninfo_unexecuted_blocks=1 00:12:33.975 00:12:33.975 ' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.975 --rc genhtml_branch_coverage=1 00:12:33.975 --rc genhtml_function_coverage=1 00:12:33.975 --rc genhtml_legend=1 00:12:33.975 --rc geninfo_all_blocks=1 00:12:33.975 --rc geninfo_unexecuted_blocks=1 00:12:33.975 00:12:33.975 ' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.975 --rc genhtml_branch_coverage=1 00:12:33.975 --rc genhtml_function_coverage=1 00:12:33.975 --rc genhtml_legend=1 00:12:33.975 --rc geninfo_all_blocks=1 00:12:33.975 --rc geninfo_unexecuted_blocks=1 00:12:33.975 00:12:33.975 ' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.975 --rc genhtml_branch_coverage=1 00:12:33.975 --rc genhtml_function_coverage=1 00:12:33.975 --rc genhtml_legend=1 00:12:33.975 --rc geninfo_all_blocks=1 00:12:33.975 --rc geninfo_unexecuted_blocks=1 00:12:33.975 00:12:33.975 ' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.975 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.976 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.237 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.237 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.237 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.237 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.384 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:42.385 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:42.385 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:42.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:42.385 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:12:42.385 00:12:42.385 --- 10.0.0.2 ping statistics --- 00:12:42.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.385 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:12:42.385 00:12:42.385 --- 10.0.0.1 ping statistics --- 00:12:42.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.385 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2237451 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2237451 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2237451 ']' 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.385 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.385 [2024-11-27 07:07:52.784558] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:12:42.385 [2024-11-27 07:07:52.784626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.385 [2024-11-27 07:07:52.885612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.385 [2024-11-27 07:07:52.936201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.385 [2024-11-27 07:07:52.936253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.385 [2024-11-27 07:07:52.936262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.385 [2024-11-27 07:07:52.936275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.385 [2024-11-27 07:07:52.936281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.385 [2024-11-27 07:07:52.937082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 [2024-11-27 07:07:53.649736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 [2024-11-27 07:07:53.674018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 malloc0 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:42.646 { 00:12:42.646 "params": { 00:12:42.646 "name": "Nvme$subsystem", 00:12:42.646 "trtype": "$TEST_TRANSPORT", 00:12:42.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.646 "adrfam": "ipv4", 00:12:42.646 "trsvcid": "$NVMF_PORT", 00:12:42.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.646 "hdgst": ${hdgst:-false}, 00:12:42.646 "ddgst": ${ddgst:-false} 00:12:42.646 }, 00:12:42.646 "method": "bdev_nvme_attach_controller" 00:12:42.646 } 00:12:42.646 EOF 00:12:42.646 )") 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
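At this point the target is fully provisioned over RPC: a zero-copy TCP transport (nvmf_create_transport -t tcp -o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001, at most 10 namespaces), data and discovery listeners on 10.0.0.2:4420, a 32 MiB malloc bdev with 4 KiB blocks, and that bdev attached as namespace 1. The heredoc template above comes from gen_nvmf_target_json; it expands into the concrete bdev_nvme_attach_controller JSON printed next, which bdevperf reads through a process-substitution fd (--json /dev/fd/62). A sketch of the same provisioning issued directly through scripts/rpc.py, which is what rpc_cmd wraps in this harness:

  # The RPC endpoint is a Unix-domain socket (/var/tmp/spdk.sock), so it is
  # reachable from the root namespace even though nvmf_tgt runs inside
  # cvl_0_0_ns_spdk; network namespaces do not partition Unix sockets.
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0      # 32 MiB, 4096-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches from the root namespace as an ordinary NVMe/TCP initiator using the generated JSON, which is why the 10-second verify run above reports against the Nvme1n1 bdev.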
00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:42.646 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:42.646 "params": { 00:12:42.646 "name": "Nvme1", 00:12:42.646 "trtype": "tcp", 00:12:42.646 "traddr": "10.0.0.2", 00:12:42.646 "adrfam": "ipv4", 00:12:42.646 "trsvcid": "4420", 00:12:42.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.646 "hdgst": false, 00:12:42.646 "ddgst": false 00:12:42.646 }, 00:12:42.646 "method": "bdev_nvme_attach_controller" 00:12:42.646 }' 00:12:42.646 [2024-11-27 07:07:53.775619] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:12:42.646 [2024-11-27 07:07:53.775691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237677 ] 00:12:42.906 [2024-11-27 07:07:53.869677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.906 [2024-11-27 07:07:53.923887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.166 Running I/O for 10 seconds... 00:12:45.496 6509.00 IOPS, 50.85 MiB/s [2024-11-27T06:07:57.274Z] 6565.00 IOPS, 51.29 MiB/s [2024-11-27T06:07:58.663Z] 7365.33 IOPS, 57.54 MiB/s [2024-11-27T06:07:59.604Z] 7984.50 IOPS, 62.38 MiB/s [2024-11-27T06:08:00.549Z] 8359.40 IOPS, 65.31 MiB/s [2024-11-27T06:08:01.494Z] 8597.33 IOPS, 67.17 MiB/s [2024-11-27T06:08:02.464Z] 8773.57 IOPS, 68.54 MiB/s [2024-11-27T06:08:03.407Z] 8906.88 IOPS, 69.58 MiB/s [2024-11-27T06:08:04.356Z] 9010.78 IOPS, 70.40 MiB/s [2024-11-27T06:08:04.356Z] 9093.60 IOPS, 71.04 MiB/s 00:12:53.151 Latency(us) 00:12:53.151 [2024-11-27T06:08:04.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.151 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:53.151 Verification LBA range: start 0x0 length 0x1000 00:12:53.151 Nvme1n1 : 10.01 9096.31 71.06 0.00 0.00 14025.81 2088.96 29054.29 00:12:53.151 [2024-11-27T06:08:04.356Z] =================================================================================================================== 00:12:53.151 [2024-11-27T06:08:04.356Z] Total : 9096.31 71.06 0.00 0.00 14025.81 2088.96 29054.29 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2239693 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:53.455 { 00:12:53.455 "params": { 00:12:53.455 "name": 
"Nvme$subsystem", 00:12:53.455 "trtype": "$TEST_TRANSPORT", 00:12:53.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.455 "adrfam": "ipv4", 00:12:53.455 "trsvcid": "$NVMF_PORT", 00:12:53.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.455 "hdgst": ${hdgst:-false}, 00:12:53.455 "ddgst": ${ddgst:-false} 00:12:53.455 }, 00:12:53.455 "method": "bdev_nvme_attach_controller" 00:12:53.455 } 00:12:53.455 EOF 00:12:53.455 )") 00:12:53.455 [2024-11-27 07:08:04.394328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.394355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:53.455 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:53.455 "params": { 00:12:53.455 "name": "Nvme1", 00:12:53.455 "trtype": "tcp", 00:12:53.455 "traddr": "10.0.0.2", 00:12:53.455 "adrfam": "ipv4", 00:12:53.455 "trsvcid": "4420", 00:12:53.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.455 "hdgst": false, 00:12:53.455 "ddgst": false 00:12:53.455 }, 00:12:53.455 "method": "bdev_nvme_attach_controller" 00:12:53.455 }' 00:12:53.455 [2024-11-27 07:08:04.406329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.406339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.418357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.418364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.430389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.430396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.442421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.442428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.446157] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:12:53.455 [2024-11-27 07:08:04.446210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239693 ] 00:12:53.455 [2024-11-27 07:08:04.454451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.454459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.466482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.466489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.478514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.478522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.490544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.490551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.502575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.502583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.514606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.514614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.526637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.526645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.532206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.455 [2024-11-27 07:08:04.538668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.538676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.550698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.455 [2024-11-27 07:08:04.550708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.455 [2024-11-27 07:08:04.561634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.455 [2024-11-27 07:08:04.562728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.456 [2024-11-27 07:08:04.562736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.456 [2024-11-27 07:08:04.574763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.456 [2024-11-27 07:08:04.574772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.456 [2024-11-27 07:08:04.586794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.456 [2024-11-27 07:08:04.586807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.456 [2024-11-27 07:08:04.598820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:12:53.456 [2024-11-27 07:08:04.598831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.456 [2024-11-27 07:08:04.610849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.456 [2024-11-27 07:08:04.610858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.456 [2024-11-27 07:08:04.622879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.456 [2024-11-27 07:08:04.622886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.456 [2024-11-27 07:08:04.635111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.456 [2024-11-27 07:08:04.635127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.456 [2024-11-27 07:08:04.647133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.456 [2024-11-27 07:08:04.647142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.659169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.659179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.671198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.671207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.683224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.683232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.695256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.695264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.707287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.707295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.719320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.719329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.731350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.731357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.743383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.743390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.755412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.755425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.767445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.767453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 
07:08:04.779476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.779483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.791509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.791516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.803540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.803548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.815579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.815594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 Running I/O for 5 seconds... 00:12:53.798 [2024-11-27 07:08:04.827606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.827615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.842053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.842069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.855403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.855420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.868969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.868985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.882302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.882317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.895705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.895721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.909067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.909082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.922032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.922048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.935109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.935125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.947390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.947404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.960466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:53.798 [2024-11-27 07:08:04.960481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.798 [2024-11-27 07:08:04.973673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.798 [2024-11-27 07:08:04.973688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:04.986398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:04.986412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:04.999808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:04.999824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.013372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.013388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.026339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.026354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.039737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.039753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.052175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.052190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.065190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.065205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.077876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.077891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.090240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.090256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.102781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.102796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.115904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.115919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.128574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.128589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.141454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.141469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.154908] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.154923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.167938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.167953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.180356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.180371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.193050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.193064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.206530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.066 [2024-11-27 07:08:05.206545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.066 [2024-11-27 07:08:05.219736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.067 [2024-11-27 07:08:05.219750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.067 [2024-11-27 07:08:05.232520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.067 [2024-11-27 07:08:05.232535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.067 [2024-11-27 07:08:05.245650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.067 [2024-11-27 07:08:05.245665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.067 [2024-11-27 07:08:05.259144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.067 [2024-11-27 07:08:05.259163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.272452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.272467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.284992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.285007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.298256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.298270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.311585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.311600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.325087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.325101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.337686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.337701] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.350264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.350278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.363718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.363733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.376999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.377014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.390433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.390448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.403137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.403151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.416351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.416366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.429874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.429889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.443189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.443203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.456457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.456471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.469746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.469761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.482817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.482831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.495491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.495505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.508844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.508858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.328 [2024-11-27 07:08:05.522174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.328 [2024-11-27 07:08:05.522188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.534423] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.534438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.547738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.547753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.560944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.560958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.573950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.573964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.587093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.587108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.600180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.600194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.613220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.613234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.626644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.626659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.640233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.640248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.653380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.653395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.666394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.666408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.679617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.679632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.692304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.692319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.705537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.705552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.718717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.718732] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.731888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.731910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.744843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.744857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.758137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.758152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.771380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.771394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.588 [2024-11-27 07:08:05.784361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.588 [2024-11-27 07:08:05.784375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.797805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.797820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.811150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.811168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.824304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.824318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 19222.00 IOPS, 150.17 MiB/s [2024-11-27T06:08:06.054Z] [2024-11-27 07:08:05.837911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.837925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.851071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.851086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.864423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.864437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.878034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.878049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.890803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.890817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.903941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.903955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 
07:08:05.917259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.917274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.930550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.930565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.944124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.944138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.957311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.957325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.970679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.970693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.983955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.983974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:05.996733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:05.996748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:06.010312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:06.010326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:06.023792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:06.023807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:06.037315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:06.037330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.849 [2024-11-27 07:08:06.050879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.849 [2024-11-27 07:08:06.050893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.064111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.064127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.077772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.077787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.090869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.090884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.103933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.103947] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.117122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.117136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.130394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.130408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.143670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.143684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.157056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.157071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.169684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.169699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.182707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.182722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.196195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.196209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.209424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.209438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.222296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.222311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.235953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.235972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.249390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.249406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.262844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.262859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.276238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.276253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.289810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.289825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.110 [2024-11-27 07:08:06.303448] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.110 [2024-11-27 07:08:06.303463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.316579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.316595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.330064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.330079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.343477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.343492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.356083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.356098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.368778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.368793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.381956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.381972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.394714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.394730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.408300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.408316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.421743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.421758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.434905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.434920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.447314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.447329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.460466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.460481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.473939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.473954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.373 [2024-11-27 07:08:06.487225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.373 [2024-11-27 07:08:06.487240] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.373 [2024-11-27 07:08:06.500420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.373 [2024-11-27 07:08:06.500435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2126 / nvmf_rpc.c:1520 error pair repeats roughly every 13 ms, about 260 times in all, from 07:08:06.513 through 07:08:09.841, with the per-second I/O progress lines below interleaved ...]
00:12:55.895 19355.50 IOPS, 151.21 MiB/s [2024-11-27T06:08:07.100Z]
00:12:56.678 19371.00 IOPS, 151.34 MiB/s [2024-11-27T06:08:07.883Z]
00:12:57.721 19385.75 IOPS, 151.45 MiB/s [2024-11-27T06:08:08.926Z]
00:12:58.764 19390.00 IOPS, 151.48 MiB/s [2024-11-27T06:08:09.969Z]
00:12:58.764 Latency(us)
00:12:58.764 [2024-11-27T06:08:09.969Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:12:58.764 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:58.764 Nvme1n1            :       5.01   19391.12     151.49      0.00     0.00    6594.70    2785.28   16711.68
00:12:58.764 [2024-11-27T06:08:09.969Z] ===================================================================================================================
00:12:58.764 [2024-11-27T06:08:09.969Z] Total              :              19391.12     151.49      0.00     0.00    6594.70    2785.28   16711.68
00:12:58.764 [... eight more occurrences of the same error pair, 07:08:09.851 through 07:08:09.936 ...]
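The two-line pattern collapsed above is one failed JSON-RPC call per repetition: spdk_nvmf_subsystem_add_ns_ext() rejects each request because NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, and the RPC layer then logs the generic "Unable to add namespace". A minimal sketch of how the same error can be reproduced by hand against a target in the state this log describes (the spare bdev name malloc1 is hypothetical; the subsystem NQN and NSID are taken from the log):

    # Create a spare bdev, then try to attach it under an NSID that is
    # already occupied; the target replies "Requested NSID 1 already in use".
    scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1

Omitting -n lets the target assign the lowest free NSID instead of failing.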
00:12:58.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2239693) - No such process
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2239693
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:58.764 delay0
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:58.764 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:59.025 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.025 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:12:59.025 [2024-11-27 07:08:10.107561] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
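rpc_cmd in the trace above is the autotest helper that forwards its arguments to SPDK's JSON-RPC client, and bdev_delay_create takes its four latency arguments in microseconds, so every read and write to delay0 is held for a full second; that keeps plenty of commands in flight for the abort example launched on the last line. A standalone sketch of the same sequence using plain scripts/rpc.py, with every name and parameter copied from this log:

    # Detach NSID 1, re-expose the base bdev behind a 1 s delay bdev,
    # then abort in-flight I/O over TCP for 5 s at queue depth 64.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'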
discovery service referral 00:13:07.167 Initializing NVMe Controllers 00:13:07.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:07.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:07.167 Initialization complete. Launching workers. 00:13:07.167 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 26772 00:13:07.167 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26937, failed to submit 96 00:13:07.167 success 26823, unsuccessful 114, failed 0 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.167 rmmod nvme_tcp 00:13:07.167 rmmod nvme_fabrics 00:13:07.167 rmmod nvme_keyring 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2237451 ']' 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2237451 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2237451 ']' 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2237451 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2237451 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2237451' 00:13:07.167 killing process with pid 2237451 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2237451 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2237451 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.167 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:08.557 00:13:08.557 real 0m34.661s 00:13:08.557 user 0m45.482s 00:13:08.557 sys 0m12.166s 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.557 ************************************ 00:13:08.557 END TEST nvmf_zcopy 00:13:08.557 ************************************ 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:08.557 ************************************ 00:13:08.557 START TEST nvmf_nmic 00:13:08.557 ************************************ 00:13:08.557 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.820 * Looking for test storage... 
00:13:08.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.820 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:08.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.821 --rc genhtml_branch_coverage=1 00:13:08.821 --rc genhtml_function_coverage=1 00:13:08.821 --rc genhtml_legend=1 00:13:08.821 --rc geninfo_all_blocks=1 00:13:08.821 --rc geninfo_unexecuted_blocks=1 00:13:08.821 00:13:08.821 ' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:08.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.821 --rc genhtml_branch_coverage=1 00:13:08.821 --rc genhtml_function_coverage=1 00:13:08.821 --rc genhtml_legend=1 00:13:08.821 --rc geninfo_all_blocks=1 00:13:08.821 --rc geninfo_unexecuted_blocks=1 00:13:08.821 00:13:08.821 ' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:08.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.821 --rc genhtml_branch_coverage=1 00:13:08.821 --rc genhtml_function_coverage=1 00:13:08.821 --rc genhtml_legend=1 00:13:08.821 --rc geninfo_all_blocks=1 00:13:08.821 --rc geninfo_unexecuted_blocks=1 00:13:08.821 00:13:08.821 ' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:08.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.821 --rc genhtml_branch_coverage=1 00:13:08.821 --rc genhtml_function_coverage=1 00:13:08.821 --rc genhtml_legend=1 00:13:08.821 --rc geninfo_all_blocks=1 00:13:08.821 --rc geninfo_unexecuted_blocks=1 00:13:08.821 00:13:08.821 ' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
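The xtrace above walks scripts/common.sh's version check: `lt 1.15 2` splits both version strings on `.`, `-` and `:` into arrays, then compares them field by field until one side differs. A minimal standalone sketch of that comparison (illustrative only — it mirrors the traced logic but omits the `decimal` field-validation step seen in the trace, and is not the SPDK source itself):

```bash
#!/usr/bin/env bash
# Sketch of the field-wise version comparison traced above (hypothetical helper).
# Returns 0 when "$1" sorts strictly before "$2", 1 otherwise.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the branch taken in the log
```

In the run above the comparison succeeds (lcov 1.15 sorts before 2), which is why the old-style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are exported in the LCOV_OPTS/LCOV variables that follow.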
00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:08.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:08.821 
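Just before `nvmftestinit` runs, the common.sh sourcing above fixed the initiator's identity for the whole test: `nvme gen-hostnqn` produced a UUID-based host NQN, the bare UUID became the host ID, and both were packed into the NVME_HOST array that later `nvme connect` calls splat in. A hedged sketch of that pattern (the `##*uuid:` derivation is an assumption on my part — the trace only shows the resulting values):

```bash
#!/usr/bin/env bash
# Sketch of the host-identity setup visible in the nvmf/common.sh trace above.
# Requires nvme-cli for gen-hostnqn.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation: strip everything up to "uuid:"
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Later connects reuse the same identity, e.g. (as target/nmic.sh@41 does below):
# nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```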
07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:08.821 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:08.822 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.822 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.822 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.822 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:08.822 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:08.822 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:08.822 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:16.975 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:16.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:16.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.976 07:08:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:16.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:16.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:16.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:13:16.976 00:13:16.976 --- 10.0.0.2 ping statistics --- 00:13:16.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.976 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:13:16.976 00:13:16.976 --- 10.0.0.1 ping statistics --- 00:13:16.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.976 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2246632 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2246632 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2246632 ']' 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.976 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:16.976 [2024-11-27 07:08:27.480312] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:13:16.977 [2024-11-27 07:08:27.480379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.977 [2024-11-27 07:08:27.580773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.977 [2024-11-27 07:08:27.635491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.977 [2024-11-27 07:08:27.635543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.977 [2024-11-27 07:08:27.635552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.977 [2024-11-27 07:08:27.635559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.977 [2024-11-27 07:08:27.635565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.977 [2024-11-27 07:08:27.637575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.977 [2024-11-27 07:08:27.637735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.977 [2024-11-27 07:08:27.637897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.977 [2024-11-27 07:08:27.637896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.238 [2024-11-27 07:08:28.363601] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.238 Malloc0 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.238 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.238 [2024-11-27 07:08:28.437921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:17.500 test case1: single bdev can't be used in multiple subsystems 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.500 [2024-11-27 07:08:28.473746] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:17.500 [2024-11-27 07:08:28.473777] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:17.500 [2024-11-27 07:08:28.473786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.500 request: 00:13:17.500 { 00:13:17.500 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:17.500 "namespace": { 00:13:17.500 "bdev_name": "Malloc0", 00:13:17.500 "no_auto_visible": false, 
00:13:17.500 "hide_metadata": false 00:13:17.500 }, 00:13:17.500 "method": "nvmf_subsystem_add_ns", 00:13:17.500 "req_id": 1 00:13:17.500 } 00:13:17.500 Got JSON-RPC error response 00:13:17.500 response: 00:13:17.500 { 00:13:17.500 "code": -32602, 00:13:17.500 "message": "Invalid parameters" 00:13:17.500 } 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:17.500 Adding namespace failed - expected result. 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:17.500 test case2: host connect to nvmf target in multiple paths 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.500 [2024-11-27 07:08:28.485948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.500 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.889 07:08:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:20.804 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.804 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:20.804 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.804 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:20.804 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:13:22.745 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:22.745 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:22.745 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.746 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:22.746 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.746 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:22.746 07:08:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:22.746 [global] 00:13:22.746 thread=1 00:13:22.746 invalidate=1 00:13:22.746 rw=write 00:13:22.746 time_based=1 00:13:22.746 runtime=1 00:13:22.746 ioengine=libaio 00:13:22.746 direct=1 00:13:22.746 bs=4096 00:13:22.746 iodepth=1 00:13:22.746 norandommap=0 00:13:22.746 numjobs=1 00:13:22.746 00:13:22.746 verify_dump=1 00:13:22.746 verify_backlog=512 00:13:22.746 verify_state_save=0 00:13:22.746 do_verify=1 00:13:22.746 verify=crc32c-intel 00:13:22.746 [job0] 00:13:22.746 filename=/dev/nvme0n1 00:13:22.746 Could not set queue depth (nvme0n1) 00:13:23.007 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.007 fio-3.35 00:13:23.007 Starting 1 thread 00:13:23.948 00:13:23.948 job0: (groupid=0, jobs=1): err= 0: pid=2247966: Wed Nov 27 07:08:35 2024 00:13:23.949 read: IOPS=638, BW=2555KiB/s (2616kB/s)(2560KiB/1002msec) 00:13:23.949 slat (nsec): min=6811, max=46706, avg=24741.83, stdev=5765.18 00:13:23.949 clat (usec): min=314, max=41962, avg=869.57, stdev=2792.39 00:13:23.949 lat (usec): min=340, max=41989, avg=894.31, stdev=2792.57 00:13:23.949 clat percentiles (usec): 00:13:23.949 | 1.00th=[ 388], 5.00th=[ 445], 10.00th=[ 502], 20.00th=[ 570], 00:13:23.949 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 693], 60.00th=[ 725], 00:13:23.949 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 848], 00:13:23.949 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[42206], 99.95th=[42206], 00:13:23.949 | 99.99th=[42206] 00:13:23.949 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:13:23.949 slat (usec): min=9, max=27238, avg=55.57, stdev=850.35 00:13:23.949 clat (usec): min=118, max=631, avg=351.96, stdev=91.78 00:13:23.949 lat (usec): min=129, max=27668, avg=407.53, stdev=857.87 00:13:23.949 clat percentiles (usec): 00:13:23.949 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 227], 20.00th=[ 293], 00:13:23.949 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 371], 00:13:23.949 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 486], 95.00th=[ 519], 00:13:23.949 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 611], 99.95th=[ 635], 00:13:23.949 | 99.99th=[ 635] 00:13:23.949 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:13:23.949 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:13:23.949 lat (usec) : 250=7.99%, 500=51.92%, 750=27.28%, 1000=12.62% 00:13:23.949 lat (msec) : 50=0.18% 00:13:23.949 cpu : usr=2.60%, sys=4.40%, ctx=1669, majf=0, minf=1 00:13:23.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.949 issued rwts: total=640,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.949 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.949 00:13:23.949 Run status group 0 (all jobs): 00:13:23.949 READ: bw=2555KiB/s (2616kB/s), 2555KiB/s-2555KiB/s (2616kB/s-2616kB/s), io=2560KiB (2621kB), run=1002-1002msec 00:13:23.949 WRITE: bw=4088KiB/s (4186kB/s), 4088KiB/s-4088KiB/s (4186kB/s-4186kB/s), io=4096KiB (4194kB), run=1002-1002msec 00:13:23.949 00:13:23.949 Disk stats (read/write): 00:13:23.949 nvme0n1: ios=662/1024, merge=0/0, ticks=1391/354, in_queue=1745, util=98.90% 
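The fio-wrapper call above just renders an ordinary fio job file (printed verbatim in the log) and runs it against the freshly connected namespace. An equivalent hand-rolled invocation would be roughly the following — a sketch, with the flags transcribed from the [global]/[job0] sections and the device name taken from this particular run:

```bash
# Sketch: standalone fio command equivalent to the generated job above.
# /dev/nvme0n1 is this run's device; substitute your own namespace.
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based=1 --runtime=1 --norandommap=0 \
    --do_verify=1 --verify=crc32c-intel \
    --verify_dump=1 --verify_backlog=512 --verify_state_save=0
```

Note that the results show both a read and a write bandwidth group for a pure rw=write job: do_verify=1 makes fio read each written block back and check its crc32c, which is where the read IOPS in the summary come from.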
00:13:23.949 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.209 rmmod nvme_tcp 00:13:24.209 rmmod nvme_fabrics 00:13:24.209 rmmod nvme_keyring 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2246632 ']' 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2246632 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2246632 ']' 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2246632 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:24.209 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.469 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246632 00:13:24.469 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246632' 00:13:24.470 killing process with pid 2246632 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2246632 
00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2246632 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.470 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.017 00:13:27.017 real 0m17.998s 00:13:27.017 user 0m48.109s 00:13:27.017 sys 0m6.568s 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.017 ************************************ 00:13:27.017 END TEST nvmf_nmic 00:13:27.017 ************************************ 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:27.017 ************************************ 00:13:27.017 START TEST nvmf_fio_target 00:13:27.017 ************************************ 00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:27.017 * Looking for test storage... 
00:13:27.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:27.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.017 --rc genhtml_branch_coverage=1
00:13:27.017 --rc genhtml_function_coverage=1
00:13:27.017 --rc genhtml_legend=1
00:13:27.017 --rc geninfo_all_blocks=1
00:13:27.017 --rc geninfo_unexecuted_blocks=1
00:13:27.017
00:13:27.017 '
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:27.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.017 --rc genhtml_branch_coverage=1
00:13:27.017 --rc genhtml_function_coverage=1
00:13:27.017 --rc genhtml_legend=1
00:13:27.017 --rc geninfo_all_blocks=1
00:13:27.017 --rc geninfo_unexecuted_blocks=1
00:13:27.017
00:13:27.017 '
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:13:27.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.017 --rc genhtml_branch_coverage=1
00:13:27.017 --rc genhtml_function_coverage=1
00:13:27.017 --rc genhtml_legend=1
00:13:27.017 --rc geninfo_all_blocks=1
00:13:27.017 --rc geninfo_unexecuted_blocks=1
00:13:27.017
00:13:27.017 '
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:13:27.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.017 --rc genhtml_branch_coverage=1
00:13:27.017 --rc genhtml_function_coverage=1
00:13:27.017 --rc genhtml_legend=1
00:13:27.017 --rc geninfo_all_blocks=1
00:13:27.017 --rc geninfo_unexecuted_blocks=1
00:13:27.017
00:13:27.017 '
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:27.017 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:27.018 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:27.018 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:27.018 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:27.018 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:13:27.018 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:13:35.162 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:13:35.162 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:13:35.162 Found net devices under 0000:4b:00.0: cvl_0_0
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:13:35.162 Found net devices under 0000:4b:00.1: cvl_0_1
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:35.162 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:35.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:35.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms
00:13:35.163
00:13:35.163 --- 10.0.0.2 ping statistics ---
00:13:35.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:35.163 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:35.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:35.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms
00:13:35.163
00:13:35.163 --- 10.0.0.1 ping statistics ---
00:13:35.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:35.163 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2252612
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2252612
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2252612 ']'
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:35.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:35.163 07:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:35.163 [2024-11-27 07:08:45.613369] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:13:35.163 [2024-11-27 07:08:45.613437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:35.163 [2024-11-27 07:08:45.712857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:35.163 [2024-11-27 07:08:45.765651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:35.163 [2024-11-27 07:08:45.765700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:35.163 [2024-11-27 07:08:45.765708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:35.163 [2024-11-27 07:08:45.765715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:35.163 [2024-11-27 07:08:45.765722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:35.163 [2024-11-27 07:08:45.768030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:35.163 [2024-11-27 07:08:45.768218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:35.163 [2024-11-27 07:08:45.768384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:35.163 [2024-11-27 07:08:45.768385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:35.426 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:35.426 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:13:35.426 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:35.426 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:35.426 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:35.426 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:35.426 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:13:35.687 [2024-11-27 07:08:46.642437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:35.687 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:35.949 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:13:35.949 07:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:35.949 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:13:35.949 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:36.210 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:13:36.210 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:36.471 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:13:36.471 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:13:36.732 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:36.994 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:13:36.994 07:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:36.994 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:13:36.994 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:13:37.255 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:13:37.255 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:13:37.548 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:37.548 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:13:37.548 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:37.809 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:13:37.809 07:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:38.070 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:38.070 [2024-11-27 07:08:49.250096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:38.331 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:13:38.331 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:13:38.592 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:39.975 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:13:39.975 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:13:39.975 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:39.975 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:13:39.975 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:13:39.975 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:13:42.519 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:42.519 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:42.519 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:42.519 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:13:42.519 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:42.519 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:13:42.520 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:13:42.520 [global]
00:13:42.520 thread=1
00:13:42.520 invalidate=1
00:13:42.520 rw=write
00:13:42.520 time_based=1
00:13:42.520 runtime=1
00:13:42.520 ioengine=libaio
00:13:42.520 direct=1
00:13:42.520 bs=4096
00:13:42.520 iodepth=1
00:13:42.520 norandommap=0
00:13:42.520 numjobs=1
00:13:42.520
00:13:42.520 verify_dump=1
00:13:42.520 verify_backlog=512
00:13:42.520 verify_state_save=0
00:13:42.520 do_verify=1
00:13:42.520 verify=crc32c-intel
00:13:42.520 [job0]
00:13:42.520 filename=/dev/nvme0n1
00:13:42.520 [job1]
00:13:42.520 filename=/dev/nvme0n2
00:13:42.520 [job2]
00:13:42.520 filename=/dev/nvme0n3
00:13:42.520 [job3]
00:13:42.520 filename=/dev/nvme0n4
00:13:42.520 Could not set queue depth (nvme0n1)
00:13:42.520 Could not set queue depth (nvme0n2)
00:13:42.520 Could not set queue depth (nvme0n3)
00:13:42.520 Could not set queue depth (nvme0n4)
00:13:42.520 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:42.520 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:42.520 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:42.520 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:42.520 fio-3.35
00:13:42.520 Starting 4 threads
00:13:43.905
00:13:43.905 job0: (groupid=0, jobs=1): err= 0: pid=2254477: Wed Nov 27 07:08:54 2024
00:13:43.905 read: IOPS=17, BW=70.1KiB/s (71.8kB/s)(72.0KiB/1027msec)
00:13:43.905 slat (nsec): min=25974, max=26778, avg=26255.17, stdev=174.66
00:13:43.905 clat (usec): min=1091, max=42098, avg=39556.78, stdev=9605.54
00:13:43.905 lat (usec): min=1117, max=42125, avg=39583.04, stdev=9605.58
00:13:43.905 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[40633], 20.00th=[41681],
00:13:43.906 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:13:43.906 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:13:43.906 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:43.906 | 99.99th=[42206]
00:13:43.906 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets
00:13:43.906 slat (usec): min=4, max=21837, avg=61.72, stdev=964.38
00:13:43.906 clat (usec): min=272, max=809, avg=548.17, stdev=112.71
00:13:43.906 lat (usec): min=277, max=22418, avg=609.89, stdev=973.68
00:13:43.906 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 343], 5.00th=[ 379], 10.00th=[ 412], 20.00th=[ 453],
00:13:43.906 | 30.00th=[ 469], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 570],
00:13:43.906 | 70.00th=[ 619], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 742],
00:13:43.906 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 807], 99.95th=[ 807],
00:13:43.906 | 99.99th=[ 807]
00:13:43.906 bw ( KiB/s): min= 4096, max= 4096, per=47.03%, avg=4096.00, stdev= 0.00, samples=1
00:13:43.906 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:43.906 lat (usec) : 500=39.43%, 750=53.77%, 1000=3.40%
00:13:43.906 lat (msec) : 2=0.19%, 50=3.21%
00:13:43.906 cpu : usr=0.68%, sys=0.78%, ctx=533, majf=0, minf=1
00:13:43.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:43.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:43.906 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:43.906 job1: (groupid=0, jobs=1): err= 0: pid=2254512: Wed Nov 27 07:08:54 2024
00:13:43.906 read: IOPS=357, BW=1429KiB/s (1463kB/s)(1480KiB/1036msec)
00:13:43.906 slat (nsec): min=26933, max=64296, avg=28311.54, stdev=3667.84
00:13:43.906 clat (usec): min=760, max=41956, avg=1757.74, stdev=5528.62
00:13:43.906 lat (usec): min=788, max=41983, avg=1786.06, stdev=5528.47
00:13:43.906 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 832], 5.00th=[ 898], 10.00th=[ 922], 20.00th=[ 947],
00:13:43.906 | 30.00th=[ 971], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004],
00:13:43.906 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090],
00:13:43.906 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:43.906 | 99.99th=[42206]
00:13:43.906 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets
00:13:43.906 slat (usec): min=9, max=25153, avg=82.79, stdev=1110.19
00:13:43.906 clat (usec): min=266, max=982, avg=634.68, stdev=126.01
00:13:43.906 lat (usec): min=278, max=25940, avg=717.47, stdev=1124.30
00:13:43.906 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 355], 5.00th=[ 416], 10.00th=[ 453], 20.00th=[ 529],
00:13:43.906 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676],
00:13:43.906 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 824],
00:13:43.906 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979],
00:13:43.906 | 99.99th=[ 979]
00:13:43.906 bw ( KiB/s): min= 4096, max= 4096, per=47.03%, avg=4096.00, stdev= 0.00, samples=1
00:13:43.906 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:43.906 lat (usec) : 500=9.75%, 750=38.44%, 1000=34.01%
00:13:43.906 lat (msec) : 2=17.01%, 50=0.79%
00:13:43.906 cpu : usr=2.61%, sys=2.80%, ctx=884, majf=0, minf=1
00:13:43.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:43.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 issued rwts: total=370,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:43.906 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:43.906 job2: (groupid=0, jobs=1): err= 0: pid=2254534: Wed Nov 27 07:08:54 2024
00:13:43.906 read: IOPS=17, BW=69.4KiB/s (71.0kB/s)(72.0KiB/1038msec)
00:13:43.906 slat (nsec): min=25224, max=25941, avg=25490.06, stdev=193.88
00:13:43.906 clat (usec): min=920, max=42145, avg=39404.44, stdev=9614.37
00:13:43.906 lat (usec): min=945, max=42171, avg=39429.93, stdev=9614.27
00:13:43.906 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 922], 5.00th=[ 922], 10.00th=[41157], 20.00th=[41157],
00:13:43.906 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681],
00:13:43.906 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:13:43.906 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:43.906 | 99.99th=[42206]
00:13:43.906 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets
00:13:43.906 slat (nsec): min=9953, max=52737, avg=30427.17, stdev=8274.04
00:13:43.906 clat (usec): min=269, max=1003, avg=604.12, stdev=128.62
00:13:43.906 lat (usec): min=280, max=1035, avg=634.54, stdev=130.86
00:13:43.906 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 310], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 494],
00:13:43.906 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 644],
00:13:43.906 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807],
00:13:43.906 | 99.00th=[ 865], 99.50th=[ 938], 99.90th=[ 1004], 99.95th=[ 1004],
00:13:43.906 | 99.99th=[ 1004]
00:13:43.906 bw ( KiB/s): min= 4096, max= 4096, per=47.03%, avg=4096.00, stdev= 0.00, samples=1
00:13:43.906 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:43.906 lat (usec) : 500=20.38%, 750=63.58%, 1000=12.64%
00:13:43.906 lat (msec) : 2=0.19%, 50=3.21%
00:13:43.906 cpu : usr=0.68%, sys=1.54%, ctx=530, majf=0, minf=2
00:13:43.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:43.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:43.906 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:43.906 job3: (groupid=0, jobs=1): err= 0: pid=2254535: Wed Nov 27 07:08:54 2024
00:13:43.906 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:13:43.906 slat (nsec): min=8590, max=44720, avg=26463.54, stdev=1757.63
00:13:43.906 clat (usec): min=738, max=1136, avg=969.80, stdev=62.93
00:13:43.906 lat (usec): min=765, max=1162, avg=996.27, stdev=62.98
00:13:43.906 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 766], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 930],
00:13:43.906 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 988],
00:13:43.906 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1037], 95.00th=[ 1057],
00:13:43.906 | 99.00th=[ 1106], 99.50th=[ 1106], 99.90th=[ 1139], 99.95th=[ 1139],
00:13:43.906 | 99.99th=[ 1139]
00:13:43.906 write: IOPS=723, BW=2893KiB/s (2963kB/s)(2896KiB/1001msec); 0 zone resets
00:13:43.906 slat (usec): min=10, max=24880, avg=65.87, stdev=923.56
00:13:43.906 clat (usec): min=221, max=919, avg=597.57, stdev=112.39
00:13:43.906 lat (usec): min=233, max=25595, avg=663.44, stdev=935.15
00:13:43.906 clat percentiles (usec):
00:13:43.906 | 1.00th=[ 318], 5.00th=[ 408], 10.00th=[ 449], 20.00th=[ 490],
00:13:43.906 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635],
00:13:43.906 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 725], 95.00th=[ 758],
00:13:43.906 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 922], 99.95th=[ 922],
00:13:43.906 | 99.99th=[ 922]
00:13:43.906 bw ( KiB/s): min= 4096, max= 4096, per=47.03%, avg=4096.00, stdev= 0.00, samples=1
00:13:43.906 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:43.906 lat (usec) : 250=0.24%, 500=12.70%, 750=42.31%, 1000=31.80%
00:13:43.906 lat (msec) : 2=12.94%
00:13:43.906 cpu : usr=1.60%, sys=3.90%, ctx=1238, majf=0, minf=1
00:13:43.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:43.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:43.906 issued rwts: total=512,724,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:43.906 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:43.906
00:13:43.906 Run status group 0 (all jobs):
00:13:43.906 READ: bw=3538KiB/s (3622kB/s), 69.4KiB/s-2046KiB/s (71.0kB/s-2095kB/s), io=3672KiB (3760kB), run=1001-1038msec
00:13:43.906 WRITE: bw=8709KiB/s (8918kB/s), 1973KiB/s-2893KiB/s (2020kB/s-2963kB/s), io=9040KiB (9257kB), run=1001-1038msec
00:13:43.906
00:13:43.906 Disk stats (read/write):
00:13:43.906 nvme0n1: ios=46/512, merge=0/0, ticks=1782/277, in_queue=2059, util=88.45%
00:13:43.906 nvme0n2: ios=397/512, merge=0/0, ticks=1694/283, in_queue=1977, util=91.65%
00:13:43.906 nvme0n3: ios=73/512, merge=0/0, ticks=743/292, in_queue=1035, util=92.63%
00:13:43.906 nvme0n4: ios=426/512, merge=0/0, ticks=1240/303, in_queue=1543, util=98.50%
00:13:43.906 07:08:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:13:43.906 [global]
00:13:43.906 thread=1
00:13:43.906 invalidate=1
00:13:43.906 rw=randwrite
00:13:43.906 time_based=1
00:13:43.906 runtime=1
00:13:43.906 ioengine=libaio
00:13:43.906 direct=1
00:13:43.906 bs=4096
00:13:43.906 iodepth=1
00:13:43.906 norandommap=0
00:13:43.906 numjobs=1
00:13:43.906
00:13:43.906 verify_dump=1
00:13:43.906 verify_backlog=512
00:13:43.906 verify_state_save=0
00:13:43.906 do_verify=1
00:13:43.906 verify=crc32c-intel
00:13:43.906 [job0]
00:13:43.906 filename=/dev/nvme0n1
00:13:43.906 [job1]
00:13:43.906 filename=/dev/nvme0n2
00:13:43.906 [job2]
00:13:43.906 filename=/dev/nvme0n3
00:13:43.906 [job3]
00:13:43.906 filename=/dev/nvme0n4
00:13:43.906 Could not set queue depth (nvme0n1)
00:13:43.906 Could not set queue depth (nvme0n2)
00:13:43.906 Could not set queue depth (nvme0n3)
00:13:43.906 Could not set queue depth (nvme0n4)
00:13:44.167 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:44.167 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:44.167 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:44.167 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:44.167 fio-3.35
00:13:44.167 Starting 4 threads
00:13:45.552
00:13:45.552 job0: (groupid=0, jobs=1): err= 0: pid=2254995: Wed Nov 27 07:08:56 2024
00:13:45.552 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:13:45.552 slat (nsec): min=7372, max=61532, avg=27289.12, stdev=3330.71
00:13:45.552 clat (usec): min=649, max=1329, avg=1044.08, stdev=119.64
00:13:45.552 lat (usec): min=676, max=1355, avg=1071.37, stdev=119.86
00:13:45.552 clat percentiles (usec):
00:13:45.552 | 1.00th=[ 717], 5.00th=[ 824], 10.00th=[ 873], 20.00th=[ 947],
00:13:45.552 | 30.00th=[ 988], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1106],
00:13:45.552 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188],
00:13:45.552 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336],
00:13:45.552 | 99.99th=[ 1336]
00:13:45.552 write: IOPS=713, BW=2853KiB/s (2922kB/s)(2856KiB/1001msec); 0 zone resets
00:13:45.552 slat (nsec): min=8802, max=53632, avg=30442.59, stdev=8147.75
00:13:45.552 clat (usec): min=222, max=959, avg=588.11, stdev=119.00
00:13:45.552 lat (usec): min=234, max=994, avg=618.55, stdev=121.46
00:13:45.552 clat percentiles (usec):
00:13:45.552 | 1.00th=[ 289], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 486],
00:13:45.552 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619],
00:13:45.552 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 775],
00:13:45.552 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 963], 99.95th=[ 963],
00:13:45.552 | 99.99th=[ 963]
00:13:45.552 bw ( KiB/s): min= 4096, max= 4096, per=41.76%, avg=4096.00, stdev= 0.00, samples=1
00:13:45.552 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:45.552 lat (usec) : 250=0.16%, 500=13.30%, 750=40.70%, 1000=17.29%
00:13:45.552 lat (msec) : 2=28.55%
00:13:45.552 cpu : usr=2.50%, sys=5.00%, ctx=1227, majf=0, minf=1
00:13:45.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:45.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.552 issued rwts: total=512,714,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:45.552 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:45.552 job1: (groupid=0, jobs=1): err= 0: pid=2255015: Wed Nov 27 07:08:56 2024
00:13:45.552 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:13:45.552 slat (nsec): min=7813, max=46323, avg=27605.77, stdev=1787.15
00:13:45.552 clat (usec): min=619, max=1183, avg=954.82, stdev=70.93
00:13:45.552 lat (usec): min=647, max=1210, avg=982.43, stdev=70.81
00:13:45.552 clat percentiles (usec):
00:13:45.552 | 1.00th=[ 742], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 914],
00:13:45.552 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979],
00:13:45.552 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057],
00:13:45.552 | 99.00th=[ 1090], 99.50th=[ 1090], 99.90th=[ 1188], 99.95th=[ 1188],
00:13:45.552 | 99.99th=[ 1188]
00:13:45.552 write: IOPS=794, BW=3177KiB/s (3253kB/s)(3180KiB/1001msec); 0 zone resets
00:13:45.552 slat (nsec): min=9240, max=57099, avg=31181.31, stdev=9188.24
00:13:45.552 clat (usec): min=217, max=2112, avg=580.62, stdev=138.96
00:13:45.552 lat (usec): min=242, max=2160, avg=611.80, stdev=142.49
00:13:45.552 clat percentiles (usec):
00:13:45.552 | 1.00th=[ 281], 5.00th=[ 355], 10.00th=[ 416], 20.00th=[ 474],
00:13:45.552 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619],
00:13:45.553 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758],
00:13:45.553 | 99.00th=[ 824], 99.50th=[ 906], 99.90th=[ 2114], 99.95th=[ 2114],
00:13:45.553 | 99.99th=[ 2114]
00:13:45.553 bw ( KiB/s): min= 4096, max= 4096, per=41.76%, avg=4096.00, stdev= 0.00, samples=1
00:13:45.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:45.553 lat (usec) : 250=0.15%, 500=15.76%, 750=41.85%, 1000=32.59%
00:13:45.553 lat (msec) : 2=9.56%, 4=0.08%
00:13:45.553 cpu : usr=3.00%, sys=5.00%, ctx=1308, majf=0, minf=1
00:13:45.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:45.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.553 issued rwts: total=512,795,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:45.553 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:45.553 job2: (groupid=0, jobs=1): err= 0: pid=2255038: Wed Nov 27 07:08:56 2024
00:13:45.553 read: IOPS=18, BW=73.6KiB/s (75.3kB/s)(76.0KiB/1033msec)
00:13:45.553 slat (nsec): min=27862, max=29439, avg=28375.32, stdev=352.97
00:13:45.553 clat (usec): min=1295, max=42117, avg=39188.58, stdev=9187.40
00:13:45.553 lat (usec): min=1324, max=42146, avg=39216.95, stdev=9187.29
00:13:45.553 clat percentiles (usec):
00:13:45.553 | 1.00th=[ 1303], 5.00th=[ 1303], 10.00th=[40633], 20.00th=[41157],
00:13:45.553 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:13:45.553 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:13:45.553 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:45.553 | 99.99th=[42206]
00:13:45.553 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets
00:13:45.553 slat (nsec): min=9997, max=82393, avg=24992.23, stdev=11529.13
00:13:45.553 clat (usec): min=123, max=1026, avg=530.64, stdev=143.32
00:13:45.553 lat (usec): min=135, max=1061, avg=555.63, stdev=146.07
00:13:45.553 clat percentiles (usec):
00:13:45.553 | 1.00th=[ 215], 5.00th=[ 281], 10.00th=[ 351], 20.00th=[ 408],
00:13:45.553 | 30.00th=[ 465], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 562],
00:13:45.553 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 717], 95.00th=[ 750],
00:13:45.553 | 99.00th=[ 857], 99.50th=[ 898], 99.90th=[ 1029], 99.95th=[ 1029],
00:13:45.553 | 99.99th=[ 1029]
00:13:45.553 bw ( KiB/s): min= 4096, max= 4096, per=41.76%, avg=4096.00, stdev= 0.00, samples=1
00:13:45.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:45.553 lat (usec) : 250=2.45%, 500=35.22%, 750=53.48%, 1000=5.08%
00:13:45.553 lat (msec) : 2=0.38%, 50=3.39%
00:13:45.553 cpu : usr=0.68%, sys=1.26%, ctx=534, majf=0, minf=1
00:13:45.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:45.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.553 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:45.553 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:45.553 job3: (groupid=0, jobs=1): err= 0: pid=2255045: Wed Nov 27 07:08:56 2024
00:13:45.553 read: IOPS=17, BW=69.7KiB/s (71.4kB/s)(72.0KiB/1033msec)
00:13:45.553 slat (nsec): min=27663, max=28418, avg=27930.89, stdev=208.43
00:13:45.553 clat (usec): min=907, max=42047, avg=39425.75, stdev=9621.52
00:13:45.553 lat (usec): min=935, max=42075, avg=39453.68, stdev=9621.55
00:13:45.553 clat percentiles (usec):
00:13:45.553 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[41157], 20.00th=[41157],
00:13:45.553 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681],
00:13:45.553 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:13:45.553 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:45.553 | 99.99th=[42206]
00:13:45.553 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets
00:13:45.553 slat (nsec): min=9344, max=55570, avg=31839.72, stdev=9765.06
00:13:45.553 clat (usec): min=211, max=841, avg=590.44, stdev=112.68
00:13:45.553 lat (usec): min=221, max=893, avg=622.28, stdev=116.14
00:13:45.553 clat percentiles (usec):
00:13:45.553 | 1.00th=[ 297], 5.00th=[ 379], 10.00th=[ 433], 20.00th=[ 490],
00:13:45.553 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627],
00:13:45.553 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 750],
00:13:45.553 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 840], 99.95th=[ 840],
00:13:45.553 | 99.99th=[ 840]
00:13:45.553 bw ( KiB/s): min= 4096, max= 4096, per=41.76%, avg=4096.00, stdev= 0.00, samples=1
00:13:45.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:45.553 lat (usec) : 250=0.38%, 500=20.57%, 750=70.94%, 1000=4.91%
00:13:45.553 lat (msec) : 50=3.21%
00:13:45.553 cpu : usr=1.07%, sys=2.03%, ctx=531, majf=0, minf=1
00:13:45.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:45.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.553 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:45.553 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:45.553
00:13:45.553 Run status group 0 (all jobs):
00:13:45.553 READ: bw=4108KiB/s (4207kB/s), 69.7KiB/s-2046KiB/s (71.4kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1033msec
00:13:45.553 WRITE: bw=9808KiB/s (10.0MB/s), 1983KiB/s-3177KiB/s (2030kB/s-3253kB/s), io=9.89MiB (10.4MB), run=1001-1033msec
00:13:45.553
00:13:45.553 Disk stats (read/write):
00:13:45.553 nvme0n1: ios=531/512, merge=0/0, ticks=517/236, in_queue=753, util=87.17%
00:13:45.553 nvme0n2: ios=568/534, merge=0/0, ticks=576/232, in_queue=808, util=91.23%
00:13:45.553 nvme0n3: ios=70/512, merge=0/0, ticks=808/253, in_queue=1061, util=93.35%
00:13:45.553 nvme0n4: ios=35/512, merge=0/0, ticks=1379/225, in_queue=1604, util=94.13%
00:13:45.553 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:13:45.553 [global]
00:13:45.553 thread=1
00:13:45.553 invalidate=1
00:13:45.553 rw=write
00:13:45.553 time_based=1
00:13:45.553 runtime=1
00:13:45.553 ioengine=libaio
00:13:45.553 direct=1
00:13:45.553 bs=4096
00:13:45.553 iodepth=128
00:13:45.553 norandommap=0
00:13:45.553 numjobs=1
00:13:45.553
00:13:45.553 verify_dump=1
00:13:45.553 verify_backlog=512
00:13:45.553 verify_state_save=0
00:13:45.553 do_verify=1
00:13:45.553 verify=crc32c-intel
00:13:45.553 [job0]
00:13:45.553 filename=/dev/nvme0n1
00:13:45.553 [job1]
00:13:45.553 filename=/dev/nvme0n2
00:13:45.553 [job2]
00:13:45.553 filename=/dev/nvme0n3
00:13:45.553 [job3]
00:13:45.553 filename=/dev/nvme0n4
00:13:45.553 Could not set queue depth (nvme0n1)
00:13:45.553 Could not set queue depth (nvme0n2)
00:13:45.553 Could not set queue depth (nvme0n3)
00:13:45.553 Could not set queue depth (nvme0n4)
00:13:45.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:13:45.813 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:13:45.813 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:13:45.813 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:13:45.813 fio-3.35
00:13:45.813 Starting 4 threads
00:13:47.200
00:13:47.200 job0: (groupid=0, jobs=1): err= 0: pid=2255484: Wed Nov 27 07:08:58 2024
00:13:47.200 read: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1004msec)
00:13:47.200 slat (nsec): min=962, max=17718k, avg=136400.51, stdev=955868.16
00:13:47.200 clat (usec): min=3516, max=79945, avg=16133.28, stdev=9665.38
00:13:47.200 lat (usec): min=3525, max=79952, avg=16269.68, stdev=9758.96
00:13:47.200 clat percentiles (usec):
00:13:47.200 | 1.00th=[ 6128], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 9765],
00:13:47.200 | 30.00th=[11600], 40.00th=[12518], 50.00th=[12911], 60.00th=[15139],
00:13:47.200 | 70.00th=[17695], 80.00th=[20841], 90.00th=[25560], 95.00th=[32375],
00:13:47.200 | 99.00th=[55837], 99.50th=[71828], 99.90th=[80217], 99.95th=[80217],
00:13:47.200 | 99.99th=[80217]
00:13:47.200 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets
00:13:47.200 slat (nsec): min=1642, max=12263k, avg=105108.00, stdev=669379.38
00:13:47.200 clat (usec): min=1133, max=79937, avg=15385.08, stdev=9575.64
00:13:47.200 lat (usec): min=1143, max=79945, avg=15490.19, stdev=9624.86
00:13:47.200 clat percentiles (usec):
00:13:47.200 | 1.00th=[ 4555], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[10159],
00:13:47.200 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12780], 60.00th=[14484],
00:13:47.200 | 70.00th=[15401], 80.00th=[18744], 90.00th=[22414], 95.00th=[27132],
00:13:47.200 | 99.00th=[65274], 99.50th=[66847], 99.90th=[67634], 99.95th=[80217],
00:13:47.200 | 99.99th=[80217]
00:13:47.200 bw ( KiB/s): min=16384, max=16384, per=17.23%, avg=16384.00, stdev= 0.00, samples=2
00:13:47.200 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2
00:13:47.200 lat (msec) : 2=0.02%, 4=0.63%, 10=18.31%, 20=63.28%, 50=15.71%
00:13:47.200 lat (msec) : 100=2.05%
00:13:47.200 cpu : usr=3.19%, sys=4.99%, ctx=305, majf=0, minf=1
00:13:47.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:13:47.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:47.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:13:47.200 issued rwts: total=3989,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:47.200 latency : target=0, window=0, percentile=100.00%, depth=128
00:13:47.200 job1: (groupid=0, jobs=1): err= 0: pid=2255504: Wed Nov 27 07:08:58 2024
00:13:47.200 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(40.5MiB/1004msec)
00:13:47.200 slat (nsec): min=955, max=6172.6k, avg=47703.82, stdev=344097.06
00:13:47.200 clat (usec): min=1388, max=13410, avg=6575.19, stdev=1571.27
00:13:47.200 lat (usec): min=2093, max=13413, avg=6622.89, stdev=1586.70
00:13:47.200 clat percentiles (usec):
00:13:47.200 | 1.00th=[ 3654], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5407],
00:13:47.200 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 6325], 60.00th=[ 6718],
00:13:47.200 | 70.00th=[ 7177], 80.00th=[ 7701], 90.00th=[ 8586], 95.00th=[ 9634],
00:13:47.200 | 99.00th=[11731], 99.50th=[11863], 99.90th=[13042], 99.95th=[13173],
00:13:47.200 | 99.99th=[13435]
00:13:47.200 write: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(42.0MiB/1004msec); 0 zone resets
00:13:47.200 slat (nsec): min=1616, max=5778.7k, avg=40407.47, stdev=271534.30
00:13:47.200 clat (usec): min=473, max=21725, avg=5505.78, stdev=1630.67
00:13:47.200 lat (usec): min=477, max=21727, avg=5546.19, stdev=1638.27
00:13:47.200 clat percentiles (usec):
00:13:47.200 | 1.00th=[ 1876], 5.00th=[ 2999], 10.00th=[ 3523], 20.00th=[ 4293],
00:13:47.200 | 30.00th=[ 5014], 40.00th=[ 5407], 50.00th=[ 5669], 60.00th=[ 5800],
00:13:47.200 | 70.00th=[ 6128], 80.00th=[ 6587], 90.00th=[ 6915], 95.00th=[ 7111],
00:13:47.200 | 99.00th=[ 9372], 99.50th=[13173], 99.90th=[20841], 99.95th=[21627],
00:13:47.200 | 99.99th=[21627]
00:13:47.200 bw ( KiB/s): min=40584, max=45416, per=45.21%, avg=43000.00, stdev=3416.74, samples=2
00:13:47.200 iops : min=10146, max=11354, avg=10750.00, stdev=854.18, samples=2
00:13:47.200 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.01%
00:13:47.200 lat (msec) : 2=0.66%, 4=8.25%, 10=88.70%, 20=2.28%, 50=0.07%
00:13:47.200 cpu : usr=8.18%, sys=9.47%, ctx=796, majf=0, minf=1
00:13:47.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:13:47.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:47.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:13:47.200 issued rwts: total=10366,10752,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:47.200 latency : target=0, window=0, percentile=100.00%, depth=128
00:13:47.200 job2: (groupid=0, jobs=1): err= 0: pid=2255523: Wed Nov 27 07:08:58 2024
00:13:47.200 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec)
00:13:47.200 slat (nsec): min=1001, max=20921k, avg=114231.55, stdev=963232.94
00:13:47.200 clat (usec): min=6074, max=43324, avg=15346.78, stdev=6810.33
00:13:47.200 lat (usec): min=6084, max=63312, avg=15461.01, stdev=6906.39
00:13:47.200 clat percentiles (usec):
00:13:47.200 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 9372],
00:13:47.200 | 30.00th=[10945], 40.00th=[11600], 50.00th=[13698], 60.00th=[15139],
00:13:47.200 | 70.00th=[18220], 80.00th=[21365], 90.00th=[27657], 95.00th=[28705],
00:13:47.200 | 99.00th=[31589], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254],
00:13:47.200 | 99.99th=[43254]
00:13:47.200 write: IOPS=3907, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1005msec); 0 zone resets
00:13:47.200 slat (nsec): min=1652, max=13588k, avg=144780.34, stdev=931419.05
00:13:47.200 clat (usec): min=1186, max=119591, avg=18474.89, stdev=20192.18
00:13:47.200 lat (usec): min=1197, max=119599, avg=18619.67, stdev=20330.94
00:13:47.200 clat percentiles (msec):
00:13:47.200 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 10],
00:13:47.200 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 15],
00:13:47.200 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 32], 95.00th=[ 64],
00:13:47.200 | 99.00th=[ 114], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 121],
00:13:47.200 | 99.99th=[ 121]
00:13:47.200 bw ( KiB/s): min=12952, max=17448, per=15.98%, avg=15200.00, stdev=3179.15, samples=2
00:13:47.201 iops : min= 3238, max= 4362, avg=3800.00, stdev=794.79, samples=2
00:13:47.201 lat (msec) : 2=0.03%, 4=0.23%, 10=24.07%, 20=55.32%, 50=17.17%
00:13:47.201 lat (msec) : 100=1.70%, 250=1.48%
00:13:47.201 cpu : usr=3.29%, sys=4.18%, ctx=281, majf=0, minf=1
00:13:47.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:13:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:47.201 complete : 0=0.0%, 4=100.0%,
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.201 issued rwts: total=3584,3927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.201 job3: (groupid=0, jobs=1): err= 0: pid=2255530: Wed Nov 27 07:08:58 2024 00:13:47.201 read: IOPS=4954, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1004msec) 00:13:47.201 slat (nsec): min=978, max=23724k, avg=100970.25, stdev=833032.10 00:13:47.201 clat (usec): min=2329, max=84300, avg=13254.82, stdev=10144.36 00:13:47.201 lat (usec): min=2336, max=84308, avg=13355.79, stdev=10221.25 00:13:47.201 clat percentiles (usec): 00:13:47.201 | 1.00th=[ 3556], 5.00th=[ 4686], 10.00th=[ 6259], 20.00th=[ 8160], 00:13:47.201 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[11338], 00:13:47.201 | 70.00th=[12649], 80.00th=[16581], 90.00th=[25560], 95.00th=[31589], 00:13:47.201 | 99.00th=[55313], 99.50th=[69731], 99.90th=[84411], 99.95th=[84411], 00:13:47.201 | 99.99th=[84411] 00:13:47.201 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:13:47.201 slat (nsec): min=1658, max=28986k, avg=79724.00, stdev=680705.92 00:13:47.201 clat (usec): min=1732, max=84294, avg=11624.25, stdev=9409.29 00:13:47.201 lat (usec): min=1740, max=84328, avg=11703.98, stdev=9464.16 00:13:47.201 clat percentiles (usec): 00:13:47.201 | 1.00th=[ 3294], 5.00th=[ 5014], 10.00th=[ 7570], 20.00th=[ 8029], 00:13:47.201 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 9110], 00:13:47.201 | 70.00th=[10421], 80.00th=[11863], 90.00th=[19268], 95.00th=[23987], 00:13:47.201 | 99.00th=[65274], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:13:47.201 | 99.99th=[84411] 00:13:47.201 bw ( KiB/s): min=16368, max=24592, per=21.53%, avg=20480.00, stdev=5815.25, samples=2 00:13:47.201 iops : min= 4092, max= 6148, avg=5120.00, stdev=1453.81, samples=2 00:13:47.201 lat (msec) : 2=0.09%, 4=2.87%, 10=59.39%, 20=26.38%, 50=9.46% 00:13:47.201 lat (msec) : 100=1.80% 00:13:47.201 cpu : usr=2.99%, sys=6.48%, ctx=465, majf=0, minf=1 00:13:47.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.201 issued rwts: total=4974,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.201 00:13:47.201 Run status group 0 (all jobs): 00:13:47.201 READ: bw=89.1MiB/s (93.4MB/s), 13.9MiB/s-40.3MiB/s (14.6MB/s-42.3MB/s), io=89.5MiB (93.9MB), run=1004-1005msec 00:13:47.201 WRITE: bw=92.9MiB/s (97.4MB/s), 15.3MiB/s-41.8MiB/s (16.0MB/s-43.9MB/s), io=93.3MiB (97.9MB), run=1004-1005msec 00:13:47.201 00:13:47.201 Disk stats (read/write): 00:13:47.201 nvme0n1: ios=3090/3551, merge=0/0, ticks=49587/50229, in_queue=99816, util=89.38% 00:13:47.201 nvme0n2: ios=8753/9039, merge=0/0, ticks=52331/45110, in_queue=97441, util=88.59% 00:13:47.201 nvme0n3: ios=2741/3072, merge=0/0, ticks=43077/57886, in_queue=100963, util=92.01% 00:13:47.201 nvme0n4: ios=3786/4096, merge=0/0, ticks=42692/38239, in_queue=80931, util=98.81% 00:13:47.201 07:08:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:47.201 [global] 00:13:47.201 thread=1 00:13:47.201 invalidate=1 00:13:47.201 rw=randwrite 00:13:47.201 time_based=1 00:13:47.201 runtime=1 00:13:47.201 
ioengine=libaio 00:13:47.201 direct=1 00:13:47.201 bs=4096 00:13:47.201 iodepth=128 00:13:47.201 norandommap=0 00:13:47.201 numjobs=1 00:13:47.201 00:13:47.201 verify_dump=1 00:13:47.201 verify_backlog=512 00:13:47.201 verify_state_save=0 00:13:47.201 do_verify=1 00:13:47.201 verify=crc32c-intel 00:13:47.201 [job0] 00:13:47.201 filename=/dev/nvme0n1 00:13:47.201 [job1] 00:13:47.201 filename=/dev/nvme0n2 00:13:47.201 [job2] 00:13:47.201 filename=/dev/nvme0n3 00:13:47.201 [job3] 00:13:47.201 filename=/dev/nvme0n4 00:13:47.201 Could not set queue depth (nvme0n1) 00:13:47.201 Could not set queue depth (nvme0n2) 00:13:47.201 Could not set queue depth (nvme0n3) 00:13:47.201 Could not set queue depth (nvme0n4) 00:13:47.770 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.770 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.770 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.770 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:47.770 fio-3.35 00:13:47.770 Starting 4 threads 00:13:48.711 00:13:48.711 job0: (groupid=0, jobs=1): err= 0: pid=2255958: Wed Nov 27 07:08:59 2024 00:13:48.711 read: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec) 00:13:48.711 slat (nsec): min=990, max=8691.0k, avg=63151.50, stdev=449941.86 00:13:48.711 clat (usec): min=2443, max=30657, avg=8276.01, stdev=2950.41 00:13:48.711 lat (usec): min=2447, max=30659, avg=8339.16, stdev=2980.23 00:13:48.711 clat percentiles (usec): 00:13:48.711 | 1.00th=[ 3720], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 6063], 00:13:48.711 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 7963], 00:13:48.711 | 70.00th=[ 9110], 80.00th=[10552], 90.00th=[11731], 95.00th=[13042], 00:13:48.711 | 99.00th=[17695], 99.50th=[25035], 99.90th=[27395], 99.95th=[30540], 00:13:48.711 | 99.99th=[30540] 00:13:48.711 write: IOPS=7252, BW=28.3MiB/s (29.7MB/s)(28.6MiB/1009msec); 0 zone resets 00:13:48.711 slat (nsec): min=1658, max=12420k, avg=69562.50, stdev=487576.80 00:13:48.711 clat (usec): min=1361, max=69772, avg=9356.02, stdev=10033.27 00:13:48.711 lat (usec): min=1369, max=69781, avg=9425.59, stdev=10098.88 00:13:48.711 clat percentiles (usec): 00:13:48.711 | 1.00th=[ 2868], 5.00th=[ 3458], 10.00th=[ 4015], 20.00th=[ 4883], 00:13:48.711 | 30.00th=[ 5735], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6849], 00:13:48.711 | 70.00th=[ 7504], 80.00th=[ 9634], 90.00th=[14877], 95.00th=[22414], 00:13:48.711 | 99.00th=[62653], 99.50th=[65799], 99.90th=[67634], 99.95th=[69731], 00:13:48.711 | 99.99th=[69731] 00:13:48.711 bw ( KiB/s): min=24760, max=32768, per=28.97%, avg=28764.00, stdev=5662.51, samples=2 00:13:48.711 iops : min= 6190, max= 8192, avg=7191.00, stdev=1415.63, samples=2 00:13:48.711 lat (msec) : 2=0.12%, 4=5.16%, 10=72.56%, 20=18.94%, 50=2.02% 00:13:48.711 lat (msec) : 100=1.19% 00:13:48.711 cpu : usr=5.65%, sys=6.75%, ctx=475, majf=0, minf=1 00:13:48.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:48.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.711 issued rwts: total=7168,7318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.711 job1: 
(groupid=0, jobs=1): err= 0: pid=2255964: Wed Nov 27 07:08:59 2024 00:13:48.711 read: IOPS=6728, BW=26.3MiB/s (27.6MB/s)(26.5MiB/1008msec) 00:13:48.711 slat (nsec): min=1002, max=17144k, avg=64290.32, stdev=521708.10 00:13:48.711 clat (usec): min=2354, max=33239, avg=8382.53, stdev=3601.80 00:13:48.711 lat (usec): min=2797, max=33243, avg=8446.83, stdev=3648.16 00:13:48.711 clat percentiles (usec): 00:13:48.711 | 1.00th=[ 3621], 5.00th=[ 5080], 10.00th=[ 5473], 20.00th=[ 5932], 00:13:48.711 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7504], 60.00th=[ 7963], 00:13:48.711 | 70.00th=[ 8356], 80.00th=[ 9372], 90.00th=[12911], 95.00th=[17957], 00:13:48.711 | 99.00th=[20317], 99.50th=[21890], 99.90th=[31589], 99.95th=[33162], 00:13:48.711 | 99.99th=[33162] 00:13:48.711 write: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec); 0 zone resets 00:13:48.711 slat (nsec): min=1660, max=12937k, avg=73756.88, stdev=545138.08 00:13:48.711 clat (usec): min=1153, max=78856, avg=9874.09, stdev=11168.05 00:13:48.711 lat (usec): min=1164, max=78864, avg=9947.85, stdev=11248.12 00:13:48.711 clat percentiles (usec): 00:13:48.711 | 1.00th=[ 3097], 5.00th=[ 3785], 10.00th=[ 4359], 20.00th=[ 5342], 00:13:48.711 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 6652], 00:13:48.711 | 70.00th=[ 7504], 80.00th=[10028], 90.00th=[15270], 95.00th=[29754], 00:13:48.711 | 99.00th=[74974], 99.50th=[77071], 99.90th=[78119], 99.95th=[79168], 00:13:48.711 | 99.99th=[79168] 00:13:48.712 bw ( KiB/s): min=27968, max=29360, per=28.87%, avg=28664.00, stdev=984.29, samples=2 00:13:48.712 iops : min= 6992, max= 7340, avg=7166.00, stdev=246.07, samples=2 00:13:48.712 lat (msec) : 2=0.01%, 4=3.73%, 10=77.39%, 20=14.36%, 50=3.43% 00:13:48.712 lat (msec) : 100=1.08% 00:13:48.712 cpu : usr=6.16%, sys=6.55%, ctx=442, majf=0, minf=1 00:13:48.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:48.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.712 issued rwts: total=6782,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.712 job2: (groupid=0, jobs=1): err= 0: pid=2255988: Wed Nov 27 07:08:59 2024 00:13:48.712 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:13:48.712 slat (nsec): min=946, max=22165k, avg=73471.49, stdev=658899.37 00:13:48.712 clat (usec): min=2613, max=51635, avg=9999.41, stdev=5749.36 00:13:48.712 lat (usec): min=2623, max=51663, avg=10072.88, stdev=5802.24 00:13:48.712 clat percentiles (usec): 00:13:48.712 | 1.00th=[ 4113], 5.00th=[ 5604], 10.00th=[ 6587], 20.00th=[ 7767], 00:13:48.712 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:13:48.712 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[13829], 95.00th=[24511], 00:13:48.712 | 99.00th=[36963], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:13:48.712 | 99.99th=[51643] 00:13:48.712 write: IOPS=6919, BW=27.0MiB/s (28.3MB/s)(27.2MiB/1008msec); 0 zone resets 00:13:48.712 slat (nsec): min=1529, max=14943k, avg=59671.57, stdev=453011.13 00:13:48.712 clat (usec): min=524, max=36982, avg=8806.47, stdev=4697.96 00:13:48.712 lat (usec): min=532, max=36984, avg=8866.14, stdev=4726.61 00:13:48.712 clat percentiles (usec): 00:13:48.712 | 1.00th=[ 2180], 5.00th=[ 4015], 10.00th=[ 5014], 20.00th=[ 5800], 00:13:48.712 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8225], 00:13:48.712 | 
70.00th=[ 8717], 80.00th=[10421], 90.00th=[14877], 95.00th=[20317], 00:13:48.712 | 99.00th=[26084], 99.50th=[29230], 99.90th=[33817], 99.95th=[33817], 00:13:48.712 | 99.99th=[36963] 00:13:48.712 bw ( KiB/s): min=22008, max=32768, per=27.58%, avg=27388.00, stdev=7608.47, samples=2 00:13:48.712 iops : min= 5502, max= 8192, avg=6847.00, stdev=1902.12, samples=2 00:13:48.712 lat (usec) : 750=0.01%, 1000=0.01% 00:13:48.712 lat (msec) : 2=0.41%, 4=2.44%, 10=76.00%, 20=15.24%, 50=5.87% 00:13:48.712 lat (msec) : 100=0.01% 00:13:48.712 cpu : usr=5.96%, sys=6.85%, ctx=399, majf=0, minf=2 00:13:48.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:48.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.712 issued rwts: total=6656,6975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.712 job3: (groupid=0, jobs=1): err= 0: pid=2255995: Wed Nov 27 07:08:59 2024 00:13:48.712 read: IOPS=3114, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1007msec) 00:13:48.712 slat (nsec): min=1017, max=19488k, avg=129399.85, stdev=1053576.20 00:13:48.712 clat (usec): min=2041, max=56341, avg=16512.36, stdev=9751.46 00:13:48.712 lat (usec): min=3820, max=64115, avg=16641.76, stdev=9855.95 00:13:48.712 clat percentiles (usec): 00:13:48.712 | 1.00th=[ 5669], 5.00th=[ 7373], 10.00th=[ 8586], 20.00th=[ 8979], 00:13:48.712 | 30.00th=[10028], 40.00th=[10814], 50.00th=[12649], 60.00th=[14484], 00:13:48.712 | 70.00th=[17957], 80.00th=[26084], 90.00th=[31851], 95.00th=[35390], 00:13:48.712 | 99.00th=[44827], 99.50th=[49546], 99.90th=[56361], 99.95th=[56361], 00:13:48.712 | 99.99th=[56361] 00:13:48.712 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:13:48.712 slat (nsec): min=1664, max=11043k, avg=150280.71, stdev=804175.17 00:13:48.712 clat (usec): min=1616, max=82459, avg=21143.60, stdev=18238.63 00:13:48.712 lat (usec): min=1634, max=82468, avg=21293.88, stdev=18335.67 00:13:48.712 clat percentiles (usec): 00:13:48.712 | 1.00th=[ 2737], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 8717], 00:13:48.712 | 30.00th=[11076], 40.00th=[13698], 50.00th=[14222], 60.00th=[14877], 00:13:48.712 | 70.00th=[20055], 80.00th=[30802], 90.00th=[55313], 95.00th=[65799], 00:13:48.712 | 99.00th=[76022], 99.50th=[78119], 99.90th=[82314], 99.95th=[82314], 00:13:48.712 | 99.99th=[82314] 00:13:48.712 bw ( KiB/s): min=12288, max=15872, per=14.18%, avg=14080.00, stdev=2534.27, samples=2 00:13:48.712 iops : min= 3072, max= 3968, avg=3520.00, stdev=633.57, samples=2 00:13:48.712 lat (msec) : 2=0.15%, 4=0.98%, 10=27.92%, 20=42.17%, 50=22.62% 00:13:48.712 lat (msec) : 100=6.16% 00:13:48.712 cpu : usr=2.98%, sys=3.48%, ctx=318, majf=0, minf=1 00:13:48.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:48.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.712 issued rwts: total=3136,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.712 00:13:48.712 Run status group 0 (all jobs): 00:13:48.712 READ: bw=91.9MiB/s (96.4MB/s), 12.2MiB/s-27.8MiB/s (12.8MB/s-29.1MB/s), io=92.7MiB (97.2MB), run=1007-1009msec 00:13:48.712 WRITE: bw=97.0MiB/s (102MB/s), 13.9MiB/s-28.3MiB/s (14.6MB/s-29.7MB/s), io=97.8MiB (103MB), 
run=1007-1009msec 00:13:48.712 00:13:48.712 Disk stats (read/write): 00:13:48.712 nvme0n1: ios=5245/5632, merge=0/0, ticks=45621/53926, in_queue=99547, util=97.09% 00:13:48.712 nvme0n2: ios=6649/6656, merge=0/0, ticks=51664/48607, in_queue=100271, util=96.64% 00:13:48.712 nvme0n3: ios=6144/6465, merge=0/0, ticks=42976/42087, in_queue=85063, util=88.40% 00:13:48.712 nvme0n4: ios=2087/2560, merge=0/0, ticks=26350/48437, in_queue=74787, util=97.22% 00:13:48.712 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:48.712 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2256130 00:13:48.712 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:48.712 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:48.972 [global] 00:13:48.972 thread=1 00:13:48.972 invalidate=1 00:13:48.972 rw=read 00:13:48.972 time_based=1 00:13:48.972 runtime=10 00:13:48.972 ioengine=libaio 00:13:48.972 direct=1 00:13:48.972 bs=4096 00:13:48.972 iodepth=1 00:13:48.972 norandommap=1 00:13:48.972 numjobs=1 00:13:48.972 00:13:48.972 [job0] 00:13:48.972 filename=/dev/nvme0n1 00:13:48.972 [job1] 00:13:48.972 filename=/dev/nvme0n2 00:13:48.972 [job2] 00:13:48.972 filename=/dev/nvme0n3 00:13:48.972 [job3] 00:13:48.973 filename=/dev/nvme0n4 00:13:48.973 Could not set queue depth (nvme0n1) 00:13:48.973 Could not set queue depth (nvme0n2) 00:13:48.973 Could not set queue depth (nvme0n3) 00:13:48.973 Could not set queue depth (nvme0n4) 00:13:49.234 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.234 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.234 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.234 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.234 fio-3.35 00:13:49.234 Starting 4 threads 00:13:51.810 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:52.131 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:13:52.131 fio: pid=2256514, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:52.131 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:52.131 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:52.131 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:52.131 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=3178496, buflen=4096 00:13:52.131 fio: pid=2256509, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:52.393 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:52.393 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:52.393 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=1298432, buflen=4096 00:13:52.393 fio: pid=2256463, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:52.655 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10539008, buflen=4096 00:13:52.655 fio: pid=2256482, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:52.655 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:52.655 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:52.655 00:13:52.655 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2256463: Wed Nov 27 07:09:03 2024 00:13:52.655 read: IOPS=106, BW=424KiB/s (434kB/s)(1268KiB/2990msec) 00:13:52.655 slat (usec): min=6, max=7011, avg=46.64, stdev=391.87 00:13:52.655 clat (usec): min=623, max=42082, avg=9378.52, stdev=16455.21 00:13:52.655 lat (usec): min=649, max=42108, avg=9403.19, stdev=16455.57 00:13:52.655 clat percentiles (usec): 00:13:52.655 | 1.00th=[ 734], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 906], 00:13:52.655 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1020], 00:13:52.655 | 70.00th=[ 1057], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:13:52.655 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:52.655 | 99.99th=[42206] 00:13:52.655 bw ( KiB/s): min= 96, max= 1424, per=10.32%, avg=488.00, stdev=590.48, samples=5 00:13:52.655 iops : min= 24, max= 356, avg=122.00, stdev=147.62, samples=5 00:13:52.655 lat (usec) : 750=1.89%, 1000=53.46% 00:13:52.655 lat (msec) : 2=23.58%, 50=20.75% 00:13:52.655 cpu : usr=0.17%, sys=0.47%, ctx=318, majf=0, minf=1 00:13:52.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:52.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 issued rwts: total=318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:52.655 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2256482: Wed Nov 27 07:09:03 2024 00:13:52.655 read: IOPS=816, BW=3264KiB/s (3343kB/s)(10.1MiB/3153msec) 00:13:52.655 slat (usec): min=6, max=10853, avg=44.49, stdev=401.95 00:13:52.655 clat (usec): min=353, max=41952, avg=1166.06, stdev=3176.71 00:13:52.655 lat (usec): min=378, max=41978, avg=1210.56, stdev=3200.57 00:13:52.655 clat percentiles (usec): 00:13:52.655 | 1.00th=[ 510], 5.00th=[ 701], 10.00th=[ 775], 20.00th=[ 857], 00:13:52.655 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 955], 00:13:52.655 | 70.00th=[ 971], 80.00th=[ 979], 90.00th=[ 1004], 95.00th=[ 1020], 00:13:52.655 | 99.00th=[ 1139], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:52.655 | 99.99th=[42206] 00:13:52.655 bw ( KiB/s): min= 2192, max= 4233, per=68.74%, avg=3252.17, stdev=1032.99, samples=6 00:13:52.655 iops : min= 548, max= 1058, avg=813.00, stdev=258.20, samples=6 00:13:52.655 lat (usec) : 500=0.85%, 750=7.03%, 1000=81.55% 00:13:52.655 lat (msec) : 2=9.87%, 10=0.04%, 50=0.62% 00:13:52.655 cpu : 
usr=0.89%, sys=2.86%, ctx=2581, majf=0, minf=2 00:13:52.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:52.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 issued rwts: total=2574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:52.655 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2256509: Wed Nov 27 07:09:03 2024 00:13:52.655 read: IOPS=276, BW=1106KiB/s (1133kB/s)(3104KiB/2806msec) 00:13:52.655 slat (usec): min=8, max=214, avg=25.82, stdev= 7.26 00:13:52.655 clat (usec): min=654, max=43046, avg=3572.07, stdev=9975.38 00:13:52.655 lat (usec): min=680, max=43073, avg=3597.89, stdev=9976.37 00:13:52.655 clat percentiles (usec): 00:13:52.655 | 1.00th=[ 766], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 922], 00:13:52.655 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:13:52.655 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1057], 95.00th=[41157], 00:13:52.655 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:13:52.655 | 99.99th=[43254] 00:13:52.655 bw ( KiB/s): min= 88, max= 2104, per=25.96%, avg=1228.80, stdev=1043.19, samples=5 00:13:52.655 iops : min= 22, max= 526, avg=307.20, stdev=260.80, samples=5 00:13:52.655 lat (usec) : 750=0.64%, 1000=72.72% 00:13:52.655 lat (msec) : 2=20.08%, 50=6.44% 00:13:52.655 cpu : usr=0.21%, sys=0.93%, ctx=778, majf=0, minf=2 00:13:52.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:52.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 issued rwts: total=777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:52.655 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2256514: Wed Nov 27 07:09:03 2024 00:13:52.655 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(252KiB/2603msec) 00:13:52.655 slat (nsec): min=26640, max=33804, avg=27358.53, stdev=885.65 00:13:52.655 clat (usec): min=1020, max=42293, avg=40931.35, stdev=5131.91 00:13:52.655 lat (usec): min=1053, max=42320, avg=40958.72, stdev=5131.08 00:13:52.655 clat percentiles (usec): 00:13:52.655 | 1.00th=[ 1020], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:52.655 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:13:52.655 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:52.655 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:52.655 | 99.99th=[42206] 00:13:52.655 bw ( KiB/s): min= 96, max= 104, per=2.05%, avg=97.60, stdev= 3.58, samples=5 00:13:52.655 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:13:52.655 lat (msec) : 2=1.56%, 50=96.88% 00:13:52.655 cpu : usr=0.15%, sys=0.00%, ctx=64, majf=0, minf=2 00:13:52.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:52.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.655 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:52.655 00:13:52.655 Run 
status group 0 (all jobs): 00:13:52.655 READ: bw=4731KiB/s (4844kB/s), 96.8KiB/s-3264KiB/s (99.1kB/s-3343kB/s), io=14.6MiB (15.3MB), run=2603-3153msec 00:13:52.655 00:13:52.655 Disk stats (read/write): 00:13:52.655 nvme0n1: ios=313/0, merge=0/0, ticks=2797/0, in_queue=2797, util=94.79% 00:13:52.655 nvme0n2: ios=2527/0, merge=0/0, ticks=2884/0, in_queue=2884, util=94.33% 00:13:52.655 nvme0n3: ios=770/0, merge=0/0, ticks=2524/0, in_queue=2524, util=96.03% 00:13:52.655 nvme0n4: ios=63/0, merge=0/0, ticks=2581/0, in_queue=2581, util=96.46% 00:13:52.655 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:52.655 07:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:52.917 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:52.917 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:53.178 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.178 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:53.439 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.439 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:53.439 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:53.439 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2256130 00:13:53.439 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:53.439 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:13:53.700 nvmf hotplug test: fio failed as expected 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.700 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:53.962 rmmod nvme_tcp 00:13:53.962 rmmod nvme_fabrics 00:13:53.962 rmmod nvme_keyring 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2252612 ']' 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2252612 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2252612 ']' 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2252612 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.962 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2252612 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2252612' 00:13:53.962 killing process with pid 2252612 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2252612 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2252612 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.962 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:56.512 00:13:56.512 real 0m29.453s 00:13:56.512 user 2m41.453s 00:13:56.512 sys 0m9.584s 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.512 ************************************ 00:13:56.512 END TEST nvmf_fio_target 00:13:56.512 ************************************ 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:56.512 ************************************ 00:13:56.512 START TEST nvmf_bdevio 00:13:56.512 ************************************ 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:56.512 * Looking for test storage... 
00:13:56.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:13:56.512 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.513 --rc genhtml_branch_coverage=1 00:13:56.513 --rc genhtml_function_coverage=1 00:13:56.513 --rc genhtml_legend=1 00:13:56.513 --rc geninfo_all_blocks=1 00:13:56.513 --rc geninfo_unexecuted_blocks=1 00:13:56.513 00:13:56.513 ' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.513 --rc genhtml_branch_coverage=1 00:13:56.513 --rc genhtml_function_coverage=1 00:13:56.513 --rc genhtml_legend=1 00:13:56.513 --rc geninfo_all_blocks=1 00:13:56.513 --rc geninfo_unexecuted_blocks=1 00:13:56.513 00:13:56.513 ' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.513 --rc genhtml_branch_coverage=1 00:13:56.513 --rc genhtml_function_coverage=1 00:13:56.513 --rc genhtml_legend=1 00:13:56.513 --rc geninfo_all_blocks=1 00:13:56.513 --rc geninfo_unexecuted_blocks=1 00:13:56.513 00:13:56.513 ' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.513 --rc genhtml_branch_coverage=1 00:13:56.513 --rc genhtml_function_coverage=1 00:13:56.513 --rc genhtml_legend=1 00:13:56.513 --rc geninfo_all_blocks=1 00:13:56.513 --rc geninfo_unexecuted_blocks=1 00:13:56.513 00:13:56.513 ' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.513 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.514 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.514 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.514 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.514 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.514 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.514 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.514 07:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:04.659 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:04.659 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:04.659 07:09:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:04.659 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:04.659 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.659 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.660 
07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:04.660 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:04.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:04.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms
00:14:04.660
00:14:04.660 --- 10.0.0.2 ping statistics ---
00:14:04.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:04.660 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:04.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:04.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:14:04.660
00:14:04.660 --- 10.0.0.1 ping statistics ---
00:14:04.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:04.660 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2262239
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2262239
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2262239 ']'
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:04.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:04.660 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:04.660 [2024-11-27 07:09:15.151326] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
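The trace above is nvmftestinit wiring the two e810 ports discovered earlier into a point-to-point NVMe/TCP topology: the target port (cvl_0_0) moves into its own network namespace, both sides get a 10.0.0.x/24 address, an iptables rule opens port 4420, connectivity is verified in both directions, and nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same pattern; the interface names, addresses and the nvmf_tgt invocation are copied from the trace, the script around them is a reconstruction, not the harness itself (run as root):

# Namespace-per-target topology, as traced above.
TGT_IF=cvl_0_0                 # becomes the in-namespace target port
INI_IF=cvl_0_1                 # stays in the root namespace as the initiator port
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# The comment tag is what lets teardown later sweep only SPDK's rules.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"

ping -c 1 10.0.0.2                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace

ip netns exec "$NS" \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &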
00:14:04.660 [2024-11-27 07:09:15.151394] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.660 [2024-11-27 07:09:15.250073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.660 [2024-11-27 07:09:15.302714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.660 [2024-11-27 07:09:15.302765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.660 [2024-11-27 07:09:15.302774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.660 [2024-11-27 07:09:15.302781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.660 [2024-11-27 07:09:15.302787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.660 [2024-11-27 07:09:15.304856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:04.660 [2024-11-27 07:09:15.305018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:04.660 [2024-11-27 07:09:15.305204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.660 [2024-11-27 07:09:15.305204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:04.922 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.922 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:04.922 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.922 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.922 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.922 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.922 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.922 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.923 [2024-11-27 07:09:16.025878] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.923 Malloc0 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.923 07:09:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.923 [2024-11-27 07:09:16.106076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:04.923 { 00:14:04.923 "params": { 00:14:04.923 "name": "Nvme$subsystem", 00:14:04.923 "trtype": "$TEST_TRANSPORT", 00:14:04.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:04.923 "adrfam": "ipv4", 00:14:04.923 "trsvcid": "$NVMF_PORT", 00:14:04.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:04.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:04.923 "hdgst": ${hdgst:-false}, 00:14:04.923 "ddgst": ${ddgst:-false} 00:14:04.923 }, 00:14:04.923 "method": "bdev_nvme_attach_controller" 00:14:04.923 } 00:14:04.923 EOF 00:14:04.923 )") 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:04.923 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:14:05.184 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:05.184 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:05.184 "params": { 00:14:05.184 "name": "Nvme1", 00:14:05.184 "trtype": "tcp", 00:14:05.184 "traddr": "10.0.0.2", 00:14:05.184 "adrfam": "ipv4", 00:14:05.184 "trsvcid": "4420", 00:14:05.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.184 "hdgst": false, 00:14:05.184 "ddgst": false 00:14:05.184 }, 00:14:05.184 "method": "bdev_nvme_attach_controller" 00:14:05.184 }' 00:14:05.184 [2024-11-27 07:09:16.163540] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
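Above, rpc_cmd provisions the target (TCP transport, a 64 MiB/512 B Malloc bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420), and gen_nvmf_target_json emits the initiator-side JSON that bdevio consumes via --json /dev/fd/62. A sketch of the same sequence: the rpc.py invocation and the outer subsystems/config wrapper are reconstructed from SPDK's usual JSON-config shape and are not verbatim in this trace; only the RPC arguments and the inner bdev_nvme_attach_controller object are.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed: rpc_cmd wraps this

"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator config for bdevio, written to a file instead of /dev/fd/62.
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json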
00:14:05.184 [2024-11-27 07:09:16.163605] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262416 ]
00:14:05.184 [2024-11-27 07:09:16.240541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:05.184 [2024-11-27 07:09:16.298027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:05.184 [2024-11-27 07:09:16.298213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:05.184 [2024-11-27 07:09:16.298244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:05.445 I/O targets:
00:14:05.445 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:14:05.445
00:14:05.445
00:14:05.445 CUnit - A unit testing framework for C - Version 2.1-3
00:14:05.445 http://cunit.sourceforge.net/
00:14:05.445
00:14:05.445
00:14:05.445 Suite: bdevio tests on: Nvme1n1
00:14:05.445 Test: blockdev write read block ...passed
00:14:05.445 Test: blockdev write zeroes read block ...passed
00:14:05.445 Test: blockdev write zeroes read no split ...passed
00:14:05.445 Test: blockdev write zeroes read split ...passed
00:14:05.445 Test: blockdev write zeroes read split partial ...passed
00:14:05.445 Test: blockdev reset ...[2024-11-27 07:09:16.643124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:14:05.445 [2024-11-27 07:09:16.643233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1028970 (9): Bad file descriptor
00:14:05.706 [2024-11-27 07:09:16.746478] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:14:05.706 passed
00:14:05.706 Test: blockdev write read 8 blocks ...passed
00:14:05.706 Test: blockdev write read size > 128k ...passed
00:14:05.706 Test: blockdev write read invalid size ...passed
00:14:05.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:05.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:05.706 Test: blockdev write read max offset ...passed
00:14:05.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:05.706 Test: blockdev writev readv 8 blocks ...passed
00:14:05.706 Test: blockdev writev readv 30 x 1block ...passed
00:14:05.967 Test: blockdev writev readv block ...passed
00:14:05.967 Test: blockdev writev readv size > 128k ...passed
00:14:05.967 Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:05.967 Test: blockdev comparev and writev ...[2024-11-27 07:09:16.931967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.932018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:16.932035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.932044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:16.932591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.932605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:16.932619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.932627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:16.933175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.933187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:16.933201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.933209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:16.933721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.933734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:16.933749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:05.967 [2024-11-27 07:09:16.933757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:14:05.967 passed
00:14:05.967 Test: blockdev nvme passthru rw ...passed
00:14:05.967 Test: blockdev nvme passthru vendor specific ...[2024-11-27 07:09:17.018015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:05.967 [2024-11-27 07:09:17.018031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:17.018420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:05.967 [2024-11-27 07:09:17.018432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:17.018814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:05.967 [2024-11-27 07:09:17.018831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:14:05.967 [2024-11-27 07:09:17.019209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:05.967 [2024-11-27 07:09:17.019221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:14:05.967 passed
00:14:05.967 Test: blockdev nvme admin passthru ...passed
00:14:05.967 Test: blockdev copy ...passed
00:14:05.967
00:14:05.967 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:14:05.967               suites      1      1    n/a      0        0
00:14:05.967                tests     23     23     23      0        0
00:14:05.967              asserts    152    152    152      0      n/a
00:14:05.967
00:14:05.967 Elapsed time =    1.227 seconds
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:06.228 rmmod nvme_tcp
00:14:06.228 rmmod nvme_fabrics
00:14:06.228 rmmod nvme_keyring
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
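The teardown traced above unloads the kernel NVMe/TCP initiator stack: modprobe -v -r nvme-tcp also drags out nvme_fabrics and nvme_keyring (the rmmod lines), and the harness loops under set +e because a module can stay busy while connections drain. A sketch of that pattern; the break and back-off are assumptions, since the trace only shows the loop header and one successful iteration:

sync                                  # flush dirty pages before pulling modules
set +e                                # unload failures are tolerated and retried
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break  # prints the rmmod lines seen in the log
    sleep 1                           # assumed: back off while references drain
done
modprobe -v -r nvme-fabrics           # no-op if the dependency chain already went
set -e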
00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2262239 ']' 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2262239 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2262239 ']' 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2262239 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262239 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262239' 00:14:06.228 killing process with pid 2262239 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2262239 00:14:06.228 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2262239 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.489 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.419 07:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.680 00:14:08.680 real 0m12.327s 00:14:08.680 user 0m13.163s 00:14:08.680 sys 0m6.376s 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.680 ************************************ 00:14:08.680 END TEST nvmf_bdevio 00:14:08.680 ************************************ 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:08.680 00:14:08.680 real 5m5.557s 00:14:08.680 user 11m55.913s 00:14:08.680 sys 1m52.976s 
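The killprocess trace above guards the kill: it checks the PID is non-empty and alive, resolves the process name with ps, refuses to signal a sudo wrapper directly, then kills and reaps. A sketch of the helper as reconstructed from this trace; the sudo branch body is an assumption (here the target was reactor_3, so the branch was not taken), and the final iptables sweep is the nvmf_tcp_fini step traced right after it:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                   # the '[ -z 2262239 ]' guard
    kill -0 "$pid" || return 1                  # fail fast if the process is gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            # assumed: signal the child of the sudo wrapper, not sudo itself
            pid=$(ps --ppid "$pid" --no-headers -o pid=)
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                         # reap; tolerate non-zero exit
}
killprocess 2262239

# nvmf_tcp_fini then removes every rule tagged SPDK_NVMF in one pass:
iptables-save | grep -v SPDK_NVMF | iptables-restore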
00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:08.680 ************************************ 00:14:08.680 END TEST nvmf_target_core 00:14:08.680 ************************************ 00:14:08.680 07:09:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:08.680 07:09:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.680 07:09:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.680 07:09:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.680 ************************************ 00:14:08.680 START TEST nvmf_target_extra 00:14:08.680 ************************************ 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:08.680 * Looking for test storage... 00:14:08.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:14:08.680 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.941 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:08.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.942 --rc genhtml_branch_coverage=1 00:14:08.942 --rc genhtml_function_coverage=1 00:14:08.942 --rc genhtml_legend=1 00:14:08.942 --rc geninfo_all_blocks=1 00:14:08.942 --rc geninfo_unexecuted_blocks=1 00:14:08.942 00:14:08.942 ' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:08.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.942 --rc genhtml_branch_coverage=1 00:14:08.942 --rc genhtml_function_coverage=1 00:14:08.942 --rc genhtml_legend=1 00:14:08.942 --rc geninfo_all_blocks=1 00:14:08.942 --rc geninfo_unexecuted_blocks=1 00:14:08.942 00:14:08.942 ' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:08.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.942 --rc genhtml_branch_coverage=1 00:14:08.942 --rc genhtml_function_coverage=1 00:14:08.942 --rc genhtml_legend=1 00:14:08.942 --rc geninfo_all_blocks=1 00:14:08.942 --rc geninfo_unexecuted_blocks=1 00:14:08.942 00:14:08.942 ' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:08.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.942 --rc genhtml_branch_coverage=1 00:14:08.942 --rc genhtml_function_coverage=1 00:14:08.942 --rc genhtml_legend=1 00:14:08.942 --rc geninfo_all_blocks=1 00:14:08.942 --rc geninfo_unexecuted_blocks=1 00:14:08.942 00:14:08.942 ' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
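A few entries above (and again in the nvmf_example preamble below), scripts/common.sh compares the installed lcov version against 2 before choosing coverage flags: lt calls cmp_versions, which splits both versions on '.', '-' and ':' and compares component by component. A sketch of that helper reconstructed from the trace, assuming numeric components (the real helper also validates each component via its decimal function):

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"     # split on '.', '-' and ':', as traced
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # missing components compare as 0, so '1.15' vs '2' works per component
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                  # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }   # 'lt 1.15 2' is the call in the trace

# Guard old-style lcov options exactly as the harness does:
lt "$(lcov --version | awk '{print $NF}')" 2 &&
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'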
00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.942 07:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.942 ************************************ 00:14:08.942 START TEST nvmf_example 00:14:08.942 ************************************ 00:14:08.942 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:08.942 * Looking for test storage... 
00:14:08.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.205 --rc genhtml_branch_coverage=1 00:14:09.205 --rc genhtml_function_coverage=1 00:14:09.205 --rc genhtml_legend=1 00:14:09.205 --rc geninfo_all_blocks=1 00:14:09.205 --rc geninfo_unexecuted_blocks=1 00:14:09.205 00:14:09.205 ' 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.205 --rc genhtml_branch_coverage=1 00:14:09.205 --rc genhtml_function_coverage=1 00:14:09.205 --rc genhtml_legend=1 00:14:09.205 --rc geninfo_all_blocks=1 00:14:09.205 --rc geninfo_unexecuted_blocks=1 00:14:09.205 00:14:09.205 ' 00:14:09.205 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:09.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.206 --rc genhtml_branch_coverage=1 00:14:09.206 --rc genhtml_function_coverage=1 00:14:09.206 --rc genhtml_legend=1 00:14:09.206 --rc geninfo_all_blocks=1 00:14:09.206 --rc geninfo_unexecuted_blocks=1 00:14:09.206 00:14:09.206 ' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.206 --rc genhtml_branch_coverage=1 00:14:09.206 --rc genhtml_function_coverage=1 00:14:09.206 --rc genhtml_legend=1 00:14:09.206 --rc geninfo_all_blocks=1 00:14:09.206 --rc geninfo_unexecuted_blocks=1 00:14:09.206 00:14:09.206 ' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:09.206 07:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:09.206 07:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.206 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.207 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.207 07:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:17.353 07:09:27 
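An aside on the pattern traced above: the harness accumulates its command lines in bash arrays (NVMF_APP, NVMF_EXAMPLE) so that quoted arguments survive appending and later expansion. A sketch using the same names and flags the log records; the last two lines mirror how the array is wrapped in the target namespace and launched further down:

    NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")          # base example binary
    NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)   # flags as recorded above
    NVMF_EXAMPLE=(ip netns exec "$NVMF_TARGET_NAMESPACE" "${NVMF_EXAMPLE[@]}")
    "${NVMF_EXAMPLE[@]}" -m 0xF &                    # expands with quoting intact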
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.353 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:17.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:17.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:17.354 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:17.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.354 07:09:27 
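The discovery pass above matches PCI vendor/device IDs (0x8086:0x159b, the two E810 ports on this rig) and then resolves each function's kernel netdev through sysfs. A standalone sketch of that sysfs walk, assuming only the IDs the log matched:

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 ]] || continue
        [[ $device == 0x1592 || $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done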
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:17.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:14:17.354 00:14:17.354 --- 10.0.0.2 ping statistics --- 00:14:17.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.354 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:14:17.354 00:14:17.354 --- 10.0.0.1 ping statistics --- 00:14:17.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.354 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:17.354 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2267001 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2267001 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2267001 ']' 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.355 07:09:27 
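The sequence above builds the whole point-to-point fixture: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits the NVMe/TCP port, and one ping in each direction proves the path before any NVMe traffic flows. Condensed from the exact commands the log shows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator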
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.355 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.616 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:17.878 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:27.887 Initializing NVMe Controllers 00:14:27.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:27.887 Initialization complete. Launching workers. 00:14:27.887 ======================================================== 00:14:27.887 Latency(us) 00:14:27.887 Device Information : IOPS MiB/s Average min max 00:14:27.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18789.99 73.40 3405.48 610.89 16211.49 00:14:27.887 ======================================================== 00:14:27.887 Total : 18789.99 73.40 3405.48 610.89 16211.49 00:14:27.887 00:14:27.887 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:27.887 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:27.887 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:27.887 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.149 rmmod nvme_tcp 00:14:28.149 rmmod nvme_fabrics 00:14:28.149 rmmod nvme_keyring 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2267001 ']' 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2267001 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2267001 ']' 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2267001 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267001 00:14:28.149 07:09:39 
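Two notes on the run above. The provisioning that rpc_cmd drove (in the harness, rpc_cmd is a wrapper over SPDK's scripts/rpc.py against /var/tmp/spdk.sock) reduces to five calls; a hedged standalone equivalent, with the rpc.py path depending on the checkout:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # 64 MiB, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

And the perf table is internally consistent: 18789.99 IOPS at 4096 B per I/O is 18789.99 x 4096 / 2^20 ≈ 73.40 MiB/s, matching the reported throughput.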
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267001' 00:14:28.149 killing process with pid 2267001 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2267001 00:14:28.149 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2267001 00:14:28.149 nvmf threads initialize successfully 00:14:28.149 bdev subsystem init successfully 00:14:28.149 created a nvmf target service 00:14:28.149 create targets's poll groups done 00:14:28.149 all subsystems of target started 00:14:28.149 nvmf target is running 00:14:28.149 all subsystems of target stopped 00:14:28.149 destroy targets's poll groups done 00:14:28.149 destroyed the nvmf target service 00:14:28.149 bdev subsystem finish successfully 00:14:28.149 nvmf threads destroy successfully 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.150 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:30.701 00:14:30.701 real 0m21.419s 00:14:30.701 user 0m46.423s 00:14:30.701 sys 0m7.062s 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:30.701 ************************************ 00:14:30.701 END TEST nvmf_example 00:14:30.701 ************************************ 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.701 ************************************ 00:14:30.701 START TEST nvmf_filesystem 00:14:30.701 ************************************ 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:30.701 * Looking for test storage... 00:14:30.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:30.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.701 --rc genhtml_branch_coverage=1 00:14:30.701 --rc genhtml_function_coverage=1 00:14:30.701 --rc genhtml_legend=1 00:14:30.701 --rc geninfo_all_blocks=1 00:14:30.701 --rc geninfo_unexecuted_blocks=1 00:14:30.701 00:14:30.701 ' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:30.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.701 --rc genhtml_branch_coverage=1 00:14:30.701 --rc genhtml_function_coverage=1 00:14:30.701 --rc genhtml_legend=1 00:14:30.701 --rc geninfo_all_blocks=1 00:14:30.701 --rc geninfo_unexecuted_blocks=1 00:14:30.701 00:14:30.701 ' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:30.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.701 --rc genhtml_branch_coverage=1 00:14:30.701 --rc genhtml_function_coverage=1 00:14:30.701 --rc genhtml_legend=1 00:14:30.701 --rc geninfo_all_blocks=1 00:14:30.701 --rc geninfo_unexecuted_blocks=1 00:14:30.701 00:14:30.701 ' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:30.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.701 --rc genhtml_branch_coverage=1 00:14:30.701 --rc genhtml_function_coverage=1 00:14:30.701 --rc genhtml_legend=1 00:14:30.701 --rc geninfo_all_blocks=1 00:14:30.701 --rc geninfo_unexecuted_blocks=1 00:14:30.701 00:14:30.701 ' 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:30.701 07:09:41 
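The 'lt 1.15 2' trace above is the harness comparing the installed lcov version against 2.x, field by field on the dotted components, to pick compatible coverage flags. A simplified standalone version of that comparison (not the actual scripts/common.sh implementation):

    version_lt() {                          # returns 0 (true) if $1 < $2
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do     # missing fields compare as 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2: use the pre-2.0 lcov options"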
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:30.701 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:30.702 
07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:30.702 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:30.703 #define SPDK_CONFIG_H 00:14:30.703 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:30.703 #define SPDK_CONFIG_APPS 1 00:14:30.703 #define SPDK_CONFIG_ARCH native 00:14:30.703 #undef SPDK_CONFIG_ASAN 00:14:30.703 #undef SPDK_CONFIG_AVAHI 00:14:30.703 #undef SPDK_CONFIG_CET 00:14:30.703 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:30.703 #define SPDK_CONFIG_COVERAGE 1 00:14:30.703 #define SPDK_CONFIG_CROSS_PREFIX 00:14:30.703 #undef SPDK_CONFIG_CRYPTO 00:14:30.703 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:30.703 #undef SPDK_CONFIG_CUSTOMOCF 00:14:30.703 #undef SPDK_CONFIG_DAOS 00:14:30.703 #define SPDK_CONFIG_DAOS_DIR 00:14:30.703 #define SPDK_CONFIG_DEBUG 1 00:14:30.703 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:30.703 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:30.703 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:30.703 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:30.703 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:30.703 #undef SPDK_CONFIG_DPDK_UADK 00:14:30.703 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:30.703 #define SPDK_CONFIG_EXAMPLES 1 00:14:30.703 #undef SPDK_CONFIG_FC 00:14:30.703 #define SPDK_CONFIG_FC_PATH 00:14:30.703 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:30.703 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:30.703 #define SPDK_CONFIG_FSDEV 1 00:14:30.703 #undef SPDK_CONFIG_FUSE 00:14:30.703 #undef SPDK_CONFIG_FUZZER 00:14:30.703 #define SPDK_CONFIG_FUZZER_LIB 00:14:30.703 #undef SPDK_CONFIG_GOLANG 00:14:30.703 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:30.703 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:30.703 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:30.703 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:30.703 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:30.703 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:30.703 #undef SPDK_CONFIG_HAVE_LZ4 00:14:30.703 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:30.703 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:30.703 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:30.703 #define SPDK_CONFIG_IDXD 1 00:14:30.703 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:30.703 #undef SPDK_CONFIG_IPSEC_MB 00:14:30.703 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:30.703 #define SPDK_CONFIG_ISAL 1 00:14:30.703 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:30.703 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:30.703 #define SPDK_CONFIG_LIBDIR 00:14:30.703 #undef SPDK_CONFIG_LTO 00:14:30.703 #define SPDK_CONFIG_MAX_LCORES 128 00:14:30.703 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:30.703 #define SPDK_CONFIG_NVME_CUSE 1 00:14:30.703 #undef SPDK_CONFIG_OCF 00:14:30.703 #define SPDK_CONFIG_OCF_PATH 00:14:30.703 #define SPDK_CONFIG_OPENSSL_PATH 00:14:30.703 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:30.703 #define SPDK_CONFIG_PGO_DIR 00:14:30.703 #undef SPDK_CONFIG_PGO_USE 00:14:30.703 #define SPDK_CONFIG_PREFIX /usr/local 00:14:30.703 #undef SPDK_CONFIG_RAID5F 00:14:30.703 #undef SPDK_CONFIG_RBD 00:14:30.703 #define SPDK_CONFIG_RDMA 1 00:14:30.703 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:30.703 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:30.703 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:30.703 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:30.703 #define SPDK_CONFIG_SHARED 1 00:14:30.703 #undef SPDK_CONFIG_SMA 00:14:30.703 #define SPDK_CONFIG_TESTS 1 00:14:30.703 #undef SPDK_CONFIG_TSAN 
00:14:30.703 #define SPDK_CONFIG_UBLK 1 00:14:30.703 #define SPDK_CONFIG_UBSAN 1 00:14:30.703 #undef SPDK_CONFIG_UNIT_TESTS 00:14:30.703 #undef SPDK_CONFIG_URING 00:14:30.703 #define SPDK_CONFIG_URING_PATH 00:14:30.703 #undef SPDK_CONFIG_URING_ZNS 00:14:30.703 #undef SPDK_CONFIG_USDT 00:14:30.703 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:30.703 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:30.703 #define SPDK_CONFIG_VFIO_USER 1 00:14:30.703 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:30.703 #define SPDK_CONFIG_VHOST 1 00:14:30.703 #define SPDK_CONFIG_VIRTIO 1 00:14:30.703 #undef SPDK_CONFIG_VTUNE 00:14:30.703 #define SPDK_CONFIG_VTUNE_DIR 00:14:30.703 #define SPDK_CONFIG_WERROR 1 00:14:30.703 #define SPDK_CONFIG_WPDK_DIR 00:14:30.703 #undef SPDK_CONFIG_XNVME 00:14:30.703 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.703 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:30.704 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
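[The pm/common section traced above builds a table of resource monitors and whether each one needs sudo, then filters the monitor list by platform: Linux, not a QEMU guest, not inside a container. A minimal sketch of that selection pattern, with a hypothetical product_name variable standing in for the virtualization probe the trace elides:

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1     # BMC power readings need root
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    SUDO[0]=""                 # plain invocation
    SUDO[1]="sudo -E"          # privileged invocation, preserving the environment
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    # Bare-metal Linux outside a container also gets temperature and BMC monitors
    if [[ $(uname -s) == Linux && $product_name != QEMU && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi
    # A monitor is later launched with the sudo prefix its table entry selects:
    # ${SUDO[${MONITOR_RESOURCES_SUDO[$monitor]}]} "$monitor" ...
]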
00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:30.704 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:30.705 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.705 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
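[The long run of ": 0" / "export SPDK_TEST_..." pairs traced above is consistent with a default-then-export idiom: each test flag gets a fallback value only if the CI job left it unset, then is exported for child scripts. A short sketch of that idiom (the real autotest_common.sh may phrase it differently):

    : "${SPDK_TEST_NVMF:=0}"              # assigns 0 only if unset; xtrace prints ": 0"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced as ": tcp" because autorun-spdk.conf set it
    export SPDK_TEST_NVMF_TRANSPORT
]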
00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
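[In the block above, known leaks are suppressed rather than reported: a suppression file is recreated, a libfuse3 pattern is appended to it, and LeakSanitizer is pointed at it through LSAN_OPTIONS. A minimal sketch of that setup, reusing the paths from the trace:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"   # ignore known libfuse3 leaks
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"
]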
00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:30.706 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2269810 ]] 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2269810 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.cjKHqz 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.cjKHqz/tests/target /tmp/spdk.cjKHqz 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:14:30.707 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118250426368 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11106082816 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.707 07:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677163008 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1093632 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:30.707 * Looking for test storage... 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:30.707 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.708 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:30.708 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:14:30.708 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118250426368 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13320675328 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.970 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.971 07:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:30.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.971 --rc genhtml_branch_coverage=1 00:14:30.971 --rc genhtml_function_coverage=1 00:14:30.971 --rc genhtml_legend=1 00:14:30.971 --rc geninfo_all_blocks=1 00:14:30.971 --rc geninfo_unexecuted_blocks=1 00:14:30.971 00:14:30.971 ' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:30.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.971 --rc genhtml_branch_coverage=1 00:14:30.971 --rc genhtml_function_coverage=1 00:14:30.971 --rc genhtml_legend=1 00:14:30.971 --rc geninfo_all_blocks=1 00:14:30.971 --rc geninfo_unexecuted_blocks=1 00:14:30.971 00:14:30.971 ' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:30.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.971 --rc genhtml_branch_coverage=1 00:14:30.971 --rc genhtml_function_coverage=1 00:14:30.971 --rc genhtml_legend=1 00:14:30.971 --rc geninfo_all_blocks=1 00:14:30.971 --rc geninfo_unexecuted_blocks=1 00:14:30.971 00:14:30.971 ' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:30.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.971 --rc genhtml_branch_coverage=1 00:14:30.971 --rc genhtml_function_coverage=1 00:14:30.971 --rc genhtml_legend=1 00:14:30.971 --rc geninfo_all_blocks=1 00:14:30.971 --rc geninfo_unexecuted_blocks=1 00:14:30.971 00:14:30.971 ' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.971 07:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:30.971 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.972 07:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:39.121 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:39.121 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.121 07:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:39.121 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:39.121 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:39.121 07:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:39.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:14:39.121 00:14:39.121 --- 10.0.0.2 ping statistics --- 00:14:39.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.121 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:14:39.121 00:14:39.121 --- 10.0.0.1 ping statistics --- 00:14:39.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.121 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.121 ************************************ 00:14:39.121 START TEST nvmf_filesystem_no_in_capsule 00:14:39.121 ************************************ 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2273711 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2273711 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2273711 ']' 00:14:39.121 
07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.121 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.122 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.122 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.122 [2024-11-27 07:09:49.754466] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:14:39.122 [2024-11-27 07:09:49.754534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.122 [2024-11-27 07:09:49.856814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.122 [2024-11-27 07:09:49.910042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.122 [2024-11-27 07:09:49.910098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.122 [2024-11-27 07:09:49.910107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.122 [2024-11-27 07:09:49.910115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.122 [2024-11-27 07:09:49.910122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
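The target launch and readiness wait traced above reduce to a small pattern. A minimal sketch, assuming SPDK's stock scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket; the harness's waitforlisten helper adds a retry cap (max_retries=100) and a process liveness check on top of this:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &        # -i shm id, -e tracepoint group mask, -m core mask (4 reactors)
    nvmfpid=$!
    # Poll the RPC UNIX socket until the app answers; spdk_get_version is a cheap no-op query.
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done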
00:14:39.122 [2024-11-27 07:09:49.912468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.122 [2024-11-27 07:09:49.912627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.122 [2024-11-27 07:09:49.912790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.122 [2024-11-27 07:09:49.912791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.383 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.383 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:39.383 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.383 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.383 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.645 [2024-11-27 07:09:50.634837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.645 Malloc1 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.645 07:09:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.645 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.646 [2024-11-27 07:09:50.794401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:39.646 { 00:14:39.646 "name": "Malloc1", 00:14:39.646 "aliases": [ 00:14:39.646 "35811a02-86ec-43c9-9b47-48a76f4417d9" 00:14:39.646 ], 00:14:39.646 "product_name": "Malloc disk", 00:14:39.646 "block_size": 512, 00:14:39.646 "num_blocks": 1048576, 00:14:39.646 "uuid": "35811a02-86ec-43c9-9b47-48a76f4417d9", 00:14:39.646 "assigned_rate_limits": { 00:14:39.646 "rw_ios_per_sec": 0, 00:14:39.646 "rw_mbytes_per_sec": 0, 00:14:39.646 "r_mbytes_per_sec": 0, 00:14:39.646 "w_mbytes_per_sec": 0 00:14:39.646 }, 00:14:39.646 "claimed": true, 00:14:39.646 "claim_type": "exclusive_write", 00:14:39.646 "zoned": false, 00:14:39.646 "supported_io_types": { 00:14:39.646 "read": 
true, 00:14:39.646 "write": true, 00:14:39.646 "unmap": true, 00:14:39.646 "flush": true, 00:14:39.646 "reset": true, 00:14:39.646 "nvme_admin": false, 00:14:39.646 "nvme_io": false, 00:14:39.646 "nvme_io_md": false, 00:14:39.646 "write_zeroes": true, 00:14:39.646 "zcopy": true, 00:14:39.646 "get_zone_info": false, 00:14:39.646 "zone_management": false, 00:14:39.646 "zone_append": false, 00:14:39.646 "compare": false, 00:14:39.646 "compare_and_write": false, 00:14:39.646 "abort": true, 00:14:39.646 "seek_hole": false, 00:14:39.646 "seek_data": false, 00:14:39.646 "copy": true, 00:14:39.646 "nvme_iov_md": false 00:14:39.646 }, 00:14:39.646 "memory_domains": [ 00:14:39.646 { 00:14:39.646 "dma_device_id": "system", 00:14:39.646 "dma_device_type": 1 00:14:39.646 }, 00:14:39.646 { 00:14:39.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.646 "dma_device_type": 2 00:14:39.646 } 00:14:39.646 ], 00:14:39.646 "driver_specific": {} 00:14:39.646 } 00:14:39.646 ]' 00:14:39.646 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:39.908 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:39.908 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:39.908 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:39.908 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:39.908 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:39.908 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:39.908 07:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.295 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:41.295 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:41.295 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.295 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:41.295 07:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:43.842 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:44.102 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:45.069 ************************************ 00:14:45.069 START TEST filesystem_ext4 00:14:45.069 ************************************ 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
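Each of the three filesystem subtests that follow (ext4, btrfs, xfs) exercises the same make_filesystem helper plus a mount/write/unmount check. Reconstructed from the filesystem.sh and autotest_common.sh trace lines below, the core of one pass is roughly this sketch; the real helpers carry retry and error handling that is omitted here:

    fstype=$1; nvme_name=$2                  # e.g. ext4 nvme0n1
    force=-F                                 # mkfs.ext4 takes -F; mkfs.btrfs and mkfs.xfs take -f
    [ "$fstype" = ext4 ] || force=-f
    mkfs."$fstype" "$force" "/dev/${nvme_name}p1"
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa && sync            # prove the NVMe-oF-backed filesystem accepts writes
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # the nvmf target must still be alive afterwards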
00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:45.069 07:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:45.069 mke2fs 1.47.0 (5-Feb-2023) 00:14:45.069 Discarding device blocks: 0/522240 done 00:14:45.069 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:45.069 Filesystem UUID: da3486d2-564e-440a-8edd-18eac04e2fad 00:14:45.069 Superblock backups stored on blocks: 00:14:45.069 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:45.069 00:14:45.069 Allocating group tables: 0/64 done 00:14:45.069 Writing inode tables: 0/64 done 00:14:48.368 Creating journal (8192 blocks): done 00:14:50.142 Writing superblocks and filesystem accounting information: 0/64 done 00:14:50.143 00:14:50.143 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:50.143 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:56.726 
07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2273711 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:56.726 00:14:56.726 real 0m10.847s 00:14:56.726 user 0m0.022s 00:14:56.726 sys 0m0.090s 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.726 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 ************************************ 00:14:56.726 END TEST filesystem_ext4 00:14:56.726 ************************************ 00:14:56.726 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:56.726 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:56.726 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.726 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 ************************************ 00:14:56.726 START TEST filesystem_btrfs 00:14:56.726 ************************************ 00:14:56.726 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:56.727 07:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:56.727 btrfs-progs v6.8.1 00:14:56.727 See https://btrfs.readthedocs.io for more information. 00:14:56.727 00:14:56.727 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:56.727 NOTE: several default settings have changed in version 5.15, please make sure 00:14:56.727 this does not affect your deployments: 00:14:56.727 - DUP for metadata (-m dup) 00:14:56.727 - enabled no-holes (-O no-holes) 00:14:56.727 - enabled free-space-tree (-R free-space-tree) 00:14:56.727 00:14:56.727 Label: (null) 00:14:56.727 UUID: 580a5ac8-0c00-4cdf-9270-bd61ece255f4 00:14:56.727 Node size: 16384 00:14:56.727 Sector size: 4096 (CPU page size: 4096) 00:14:56.727 Filesystem size: 510.00MiB 00:14:56.727 Block group profiles: 00:14:56.727 Data: single 8.00MiB 00:14:56.727 Metadata: DUP 32.00MiB 00:14:56.727 System: DUP 8.00MiB 00:14:56.727 SSD detected: yes 00:14:56.727 Zoned device: no 00:14:56.727 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:56.727 Checksum: crc32c 00:14:56.727 Number of devices: 1 00:14:56.727 Devices: 00:14:56.727 ID SIZE PATH 00:14:56.727 1 510.00MiB /dev/nvme0n1p1 00:14:56.727 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2273711 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:56.727 
07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:56.727 00:14:56.727 real 0m0.778s 00:14:56.727 user 0m0.023s 00:14:56.727 sys 0m0.128s 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:56.727 ************************************ 00:14:56.727 END TEST filesystem_btrfs 00:14:56.727 ************************************ 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.727 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:56.988 ************************************ 00:14:56.988 START TEST filesystem_xfs 00:14:56.988 ************************************ 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:56.988 07:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:56.988 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:56.988 = sectsz=512 attr=2, projid32bit=1 00:14:56.988 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:56.988 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:56.988 data 
= bsize=4096 blocks=130560, imaxpct=25 00:14:56.988 = sunit=0 swidth=0 blks 00:14:56.988 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:56.988 log =internal log bsize=4096 blocks=16384, version=2 00:14:56.988 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:56.988 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:57.932 Discarding blocks...Done. 00:14:57.932 07:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:57.932 07:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2273711 00:14:59.845 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:59.846 00:14:59.846 real 0m2.752s 00:14:59.846 user 0m0.022s 00:14:59.846 sys 0m0.084s 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:59.846 ************************************ 00:14:59.846 END TEST filesystem_xfs 00:14:59.846 ************************************ 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.846 07:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2273711 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2273711 ']' 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2273711 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273711 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273711' 00:14:59.846 killing process with pid 2273711 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2273711 00:14:59.846 07:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2273711 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:00.106 00:15:00.106 real 0m21.444s 00:15:00.106 user 1m24.664s 00:15:00.106 sys 0m1.558s 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.106 ************************************ 00:15:00.106 END TEST nvmf_filesystem_no_in_capsule 00:15:00.106 ************************************ 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:00.106 ************************************ 00:15:00.106 START TEST nvmf_filesystem_in_capsule 00:15:00.106 ************************************ 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2278033 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2278033 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2278033 ']' 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
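The setup that follows repeats the RPC sequence of the first half; the only functional difference is the transport's in-capsule data size (-c 4096 here versus -c 0 in the no_in_capsule run). The same calls, written directly against scripts/rpc.py rather than the harness's rpc_cmd wrapper (default RPC socket assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -u IO unit size, -c in-capsule data size
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev with 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420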
00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.106 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.106 [2024-11-27 07:10:11.273416] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:15:00.106 [2024-11-27 07:10:11.273454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.367 [2024-11-27 07:10:11.353915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.367 [2024-11-27 07:10:11.384309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.367 [2024-11-27 07:10:11.384337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.367 [2024-11-27 07:10:11.384343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.367 [2024-11-27 07:10:11.384348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.367 [2024-11-27 07:10:11.384353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.367 [2024-11-27 07:10:11.385772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.367 [2024-11-27 07:10:11.385927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.367 [2024-11-27 07:10:11.385948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.367 [2024-11-27 07:10:11.385950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.939 [2024-11-27 07:10:12.103864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.939 07:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.939 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.200 Malloc1 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.200 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.201 [2024-11-27 07:10:12.242958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:01.201 07:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:01.201 { 00:15:01.201 "name": "Malloc1", 00:15:01.201 "aliases": [ 00:15:01.201 "06950c5c-c8f8-4f94-841a-235aa7d65178" 00:15:01.201 ], 00:15:01.201 "product_name": "Malloc disk", 00:15:01.201 "block_size": 512, 00:15:01.201 "num_blocks": 1048576, 00:15:01.201 "uuid": "06950c5c-c8f8-4f94-841a-235aa7d65178", 00:15:01.201 "assigned_rate_limits": { 00:15:01.201 "rw_ios_per_sec": 0, 00:15:01.201 "rw_mbytes_per_sec": 0, 00:15:01.201 "r_mbytes_per_sec": 0, 00:15:01.201 "w_mbytes_per_sec": 0 00:15:01.201 }, 00:15:01.201 "claimed": true, 00:15:01.201 "claim_type": "exclusive_write", 00:15:01.201 "zoned": false, 00:15:01.201 "supported_io_types": { 00:15:01.201 "read": true, 00:15:01.201 "write": true, 00:15:01.201 "unmap": true, 00:15:01.201 "flush": true, 00:15:01.201 "reset": true, 00:15:01.201 "nvme_admin": false, 00:15:01.201 "nvme_io": false, 00:15:01.201 "nvme_io_md": false, 00:15:01.201 "write_zeroes": true, 00:15:01.201 "zcopy": true, 00:15:01.201 "get_zone_info": false, 00:15:01.201 "zone_management": false, 00:15:01.201 "zone_append": false, 00:15:01.201 "compare": false, 00:15:01.201 "compare_and_write": false, 00:15:01.201 "abort": true, 00:15:01.201 "seek_hole": false, 00:15:01.201 "seek_data": false, 00:15:01.201 "copy": true, 00:15:01.201 "nvme_iov_md": false 00:15:01.201 }, 00:15:01.201 "memory_domains": [ 00:15:01.201 { 00:15:01.201 "dma_device_id": "system", 00:15:01.201 "dma_device_type": 1 00:15:01.201 }, 00:15:01.201 { 00:15:01.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.201 "dma_device_type": 2 00:15:01.201 } 00:15:01.201 ], 00:15:01.201 "driver_specific": {} 00:15:01.201 } 00:15:01.201 ]' 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:01.201 07:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.258 07:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.258 07:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:03.258 07:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.258 07:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:03.258 07:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:05.232 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:05.232 07:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:06.172 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:07.113 ************************************ 00:15:07.113 START TEST filesystem_in_capsule_ext4 00:15:07.113 ************************************ 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:07.113 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:07.113 mke2fs 1.47.0 (5-Feb-2023) 00:15:07.113 Discarding device blocks: 0/522240 done 00:15:07.113 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:07.113 Filesystem UUID: 240810fd-b983-45bb-a630-735623eb1d55 00:15:07.113 Superblock backups stored on blocks: 00:15:07.113 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:07.113 00:15:07.113 Allocating group tables: 0/64 done 00:15:07.113 Writing inode tables: 
0/64 done 00:15:08.054 Creating journal (8192 blocks): done 00:15:08.054 Writing superblocks and filesystem accounting information: 0/64 done 00:15:08.054 00:15:08.054 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:08.055 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:13.342 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:13.342 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:13.342 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:13.342 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:13.342 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:13.342 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2278033 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:13.603 00:15:13.603 real 0m6.450s 00:15:13.603 user 0m0.027s 00:15:13.603 sys 0m0.079s 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:13.603 ************************************ 00:15:13.603 END TEST filesystem_in_capsule_ext4 00:15:13.603 ************************************ 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:13.603 
************************************ 00:15:13.603 START TEST filesystem_in_capsule_btrfs 00:15:13.603 ************************************ 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:13.603 btrfs-progs v6.8.1 00:15:13.603 See https://btrfs.readthedocs.io for more information. 00:15:13.603 00:15:13.603 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:13.603 NOTE: several default settings have changed in version 5.15, please make sure 00:15:13.603 this does not affect your deployments: 00:15:13.603 - DUP for metadata (-m dup) 00:15:13.603 - enabled no-holes (-O no-holes) 00:15:13.603 - enabled free-space-tree (-R free-space-tree) 00:15:13.603 00:15:13.603 Label: (null) 00:15:13.603 UUID: f660408e-2047-42cc-b653-8c1f57b27f17 00:15:13.603 Node size: 16384 00:15:13.603 Sector size: 4096 (CPU page size: 4096) 00:15:13.603 Filesystem size: 510.00MiB 00:15:13.603 Block group profiles: 00:15:13.603 Data: single 8.00MiB 00:15:13.603 Metadata: DUP 32.00MiB 00:15:13.603 System: DUP 8.00MiB 00:15:13.603 SSD detected: yes 00:15:13.603 Zoned device: no 00:15:13.603 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:13.603 Checksum: crc32c 00:15:13.603 Number of devices: 1 00:15:13.603 Devices: 00:15:13.603 ID SIZE PATH 00:15:13.603 1 510.00MiB /dev/nvme0n1p1 00:15:13.603 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:15:13.603 07:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2278033 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:14.986 00:15:14.986 real 0m1.433s 00:15:14.986 user 0m0.019s 00:15:14.986 sys 0m0.134s 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:15:14.986 ************************************ 00:15:14.986 END TEST filesystem_in_capsule_btrfs 00:15:14.986 ************************************ 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.986 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:14.987 ************************************ 00:15:14.987 START TEST filesystem_in_capsule_xfs 00:15:14.987 ************************************ 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:15:14.987 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:15.247 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:15.247 = sectsz=512 attr=2, projid32bit=1 00:15:15.247 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:15.247 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:15.247 data = bsize=4096 blocks=130560, imaxpct=25 00:15:15.247 = sunit=0 swidth=0 blks 00:15:15.247 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:15.247 log =internal log bsize=4096 blocks=16384, version=2 00:15:15.247 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:15.247 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:16.187 Discarding blocks...Done. 
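Each of the three filesystem passes (ext4, btrfs, and the xfs run that resumes below) executes the same exercise cycle. A condensed sketch reconstructed from the traced commands; the helper internals are paraphrased, but every command, device, and path appears in the log:

  # make_filesystem: ext4 takes -F to force, btrfs and xfs take -f
  [ "$fstype" = ext4 ] && force=-F || force=-f
  mkfs.$fstype $force /dev/nvme0n1p1
  # mount, create and remove a file, then confirm the partition survives
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  lsblk -l -o NAME | grep -q -w nvme0n1p1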
00:15:16.187 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:16.187 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2278033 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:18.099 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:18.100 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:18.100 00:15:18.100 real 0m2.956s 00:15:18.100 user 0m0.026s 00:15:18.100 sys 0m0.080s 00:15:18.100 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.100 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:18.100 ************************************ 00:15:18.100 END TEST filesystem_in_capsule_xfs 00:15:18.100 ************************************ 00:15:18.100 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:18.360 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:18.361 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2278033 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2278033 ']' 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2278033 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278033 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278033' 00:15:18.622 killing process with pid 2278033 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2278033 00:15:18.622 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2278033 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:18.884 00:15:18.884 real 0m18.669s 00:15:18.884 user 1m13.883s 00:15:18.884 sys 0m1.426s 00:15:18.884 07:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:18.884 ************************************ 00:15:18.884 END TEST nvmf_filesystem_in_capsule 00:15:18.884 ************************************ 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:18.884 rmmod nvme_tcp 00:15:18.884 rmmod nvme_fabrics 00:15:18.884 rmmod nvme_keyring 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.884 07:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:21.431 00:15:21.431 real 0m50.535s 00:15:21.431 user 2m40.978s 00:15:21.431 sys 0m8.953s 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.431 
************************************ 00:15:21.431 END TEST nvmf_filesystem 00:15:21.431 ************************************ 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.431 ************************************ 00:15:21.431 START TEST nvmf_target_discovery 00:15:21.431 ************************************ 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:21.431 * Looking for test storage... 00:15:21.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.431 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:21.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.432 --rc genhtml_branch_coverage=1 00:15:21.432 --rc genhtml_function_coverage=1 00:15:21.432 --rc genhtml_legend=1 00:15:21.432 --rc geninfo_all_blocks=1 00:15:21.432 --rc geninfo_unexecuted_blocks=1 00:15:21.432 00:15:21.432 ' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:21.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.432 --rc genhtml_branch_coverage=1 00:15:21.432 --rc genhtml_function_coverage=1 00:15:21.432 --rc genhtml_legend=1 00:15:21.432 --rc geninfo_all_blocks=1 00:15:21.432 --rc geninfo_unexecuted_blocks=1 00:15:21.432 00:15:21.432 ' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:21.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.432 --rc genhtml_branch_coverage=1 00:15:21.432 --rc genhtml_function_coverage=1 00:15:21.432 --rc genhtml_legend=1 00:15:21.432 --rc geninfo_all_blocks=1 00:15:21.432 --rc geninfo_unexecuted_blocks=1 00:15:21.432 00:15:21.432 ' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:21.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.432 --rc genhtml_branch_coverage=1 00:15:21.432 --rc genhtml_function_coverage=1 00:15:21.432 --rc genhtml_legend=1 00:15:21.432 --rc geninfo_all_blocks=1 00:15:21.432 --rc geninfo_unexecuted_blocks=1 00:15:21.432 00:15:21.432 ' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:21.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:21.432 07:10:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:29.572 07:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:29.572 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:29.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:29.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:29.572 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
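
The xtrace above walks nvmf/common.sh's gather_supported_nvmf_pci_devs: it builds allow-lists of Intel E810/X722 and Mellanox device IDs, keeps the E810 pair present on this rig, and resolves each PCI function to its kernel net device through sysfs. A minimal sketch of that sysfs step, using the two PCI addresses this run reported (the harness additionally checks that each interface is up):

    # Resolve PCI functions to kernel net devices via sysfs, as traced above.
    # The two addresses are this run's E810 ports (0x8086:0x159b).
    pci_devs=(0000:4b:00.0 0000:4b:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue     # no net driver bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the sysfs path, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
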
00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:29.573 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.573 07:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:29.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:15:29.573 00:15:29.573 --- 10.0.0.2 ping statistics --- 00:15:29.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.573 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:15:29.573 00:15:29.573 --- 10.0.0.1 ping statistics --- 00:15:29.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.573 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2286108 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2286108 00:15:29.573 07:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2286108 ']' 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.573 07:10:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.573 [2024-11-27 07:10:39.963831] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:15:29.573 [2024-11-27 07:10:39.963900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.573 [2024-11-27 07:10:40.064970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.573 [2024-11-27 07:10:40.122117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.573 [2024-11-27 07:10:40.122186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.573 [2024-11-27 07:10:40.122197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.573 [2024-11-27 07:10:40.122205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.573 [2024-11-27 07:10:40.122212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
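
nvmf_tcp_init, traced above, lets one host act as both initiator and target: port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, a tagged iptables rule opens TCP 4420, both directions are ping-checked, and nvmf_tgt then starts inside the namespace. Condensed into plain commands, with this run's interface names and addresses standing in for whatever your host reports:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'     # the tag lets teardown strip exactly this rule
    ping -c 1 10.0.0.2                           # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> initiator
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                   # waitforlisten then polls /var/tmp/spdk.sock
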
00:15:29.573 [2024-11-27 07:10:40.124316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.573 [2024-11-27 07:10:40.124484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.573 [2024-11-27 07:10:40.124539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.573 [2024-11-27 07:10:40.124539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 [2024-11-27 07:10:40.848164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 Null1 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 07:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 [2024-11-27 07:10:40.918407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 Null2 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:29.835 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:15:29.836 Null3 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.836 07:10:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.836 Null4 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.836 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.098 07:10:41 
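
The rpc_cmd loop traced through here (target/discovery.sh, finishing just below with cnode4's listener) gives each of four subsystems a null bdev, one namespace, and a TCP listener, then exposes the discovery service plus a referral on port 4430. The same sequence as plain rpc.py calls, rpc.py standing in for the rpc_cmd wrapper the harness points at the target's /var/tmp/spdk.sock:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # flags exactly as recorded above
    for i in $(seq 1 4); do
        $rpc bdev_null_create "Null$i" 102400 512     # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"               # -a: allow any host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # discovery log entry 5 below
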
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.098 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:30.098 00:15:30.098 Discovery Log Number of Records 6, Generation counter 6 00:15:30.098 =====Discovery Log Entry 0====== 00:15:30.098 trtype: tcp 00:15:30.098 adrfam: ipv4 00:15:30.098 subtype: current discovery subsystem 00:15:30.098 treq: not required 00:15:30.098 portid: 0 00:15:30.098 trsvcid: 4420 00:15:30.098 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:30.098 traddr: 10.0.0.2 00:15:30.098 eflags: explicit discovery connections, duplicate discovery information 00:15:30.098 sectype: none 00:15:30.098 =====Discovery Log Entry 1====== 00:15:30.098 trtype: tcp 00:15:30.098 adrfam: ipv4 00:15:30.098 subtype: nvme subsystem 00:15:30.098 treq: not required 00:15:30.098 portid: 0 00:15:30.098 trsvcid: 4420 00:15:30.098 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:30.098 traddr: 10.0.0.2 00:15:30.098 eflags: none 00:15:30.098 sectype: none 00:15:30.098 =====Discovery Log Entry 2====== 00:15:30.098 trtype: tcp 00:15:30.098 adrfam: ipv4 00:15:30.098 subtype: nvme subsystem 00:15:30.098 treq: not required 00:15:30.098 portid: 0 00:15:30.098 trsvcid: 4420 00:15:30.098 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:30.098 traddr: 10.0.0.2 00:15:30.098 eflags: none 00:15:30.098 sectype: none 00:15:30.098 =====Discovery Log Entry 3====== 00:15:30.098 trtype: tcp 00:15:30.098 adrfam: ipv4 00:15:30.099 subtype: nvme subsystem 00:15:30.099 treq: not required 00:15:30.099 portid: 0 00:15:30.099 trsvcid: 4420 00:15:30.099 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:30.099 traddr: 10.0.0.2 00:15:30.099 eflags: none 00:15:30.099 sectype: none 00:15:30.099 =====Discovery Log Entry 4====== 00:15:30.099 trtype: tcp 00:15:30.099 adrfam: ipv4 00:15:30.099 subtype: nvme subsystem 
00:15:30.099 treq: not required 00:15:30.099 portid: 0 00:15:30.099 trsvcid: 4420 00:15:30.099 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:30.099 traddr: 10.0.0.2 00:15:30.099 eflags: none 00:15:30.099 sectype: none 00:15:30.099 =====Discovery Log Entry 5====== 00:15:30.099 trtype: tcp 00:15:30.099 adrfam: ipv4 00:15:30.099 subtype: discovery subsystem referral 00:15:30.099 treq: not required 00:15:30.099 portid: 0 00:15:30.099 trsvcid: 4430 00:15:30.099 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:30.099 traddr: 10.0.0.2 00:15:30.099 eflags: none 00:15:30.099 sectype: none 00:15:30.099 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:30.099 Perform nvmf subsystem discovery via RPC 00:15:30.099 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:30.099 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.099 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.099 [ 00:15:30.099 { 00:15:30.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.099 "subtype": "Discovery", 00:15:30.099 "listen_addresses": [ 00:15:30.099 { 00:15:30.099 "trtype": "TCP", 00:15:30.099 "adrfam": "IPv4", 00:15:30.099 "traddr": "10.0.0.2", 00:15:30.099 "trsvcid": "4420" 00:15:30.099 } 00:15:30.099 ], 00:15:30.099 "allow_any_host": true, 00:15:30.099 "hosts": [] 00:15:30.099 }, 00:15:30.099 { 00:15:30.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.099 "subtype": "NVMe", 00:15:30.099 "listen_addresses": [ 00:15:30.099 { 00:15:30.099 "trtype": "TCP", 00:15:30.099 "adrfam": "IPv4", 00:15:30.099 "traddr": "10.0.0.2", 00:15:30.099 "trsvcid": "4420" 00:15:30.099 } 00:15:30.099 ], 00:15:30.099 "allow_any_host": true, 00:15:30.099 "hosts": [], 00:15:30.099 "serial_number": "SPDK00000000000001", 00:15:30.099 "model_number": "SPDK bdev Controller", 00:15:30.099 "max_namespaces": 32, 00:15:30.099 "min_cntlid": 1, 00:15:30.099 "max_cntlid": 65519, 00:15:30.099 "namespaces": [ 00:15:30.099 { 00:15:30.099 "nsid": 1, 00:15:30.099 "bdev_name": "Null1", 00:15:30.099 "name": "Null1", 00:15:30.099 "nguid": "74FDE2EBEFCD4545A83BCD471EB49E6E", 00:15:30.099 "uuid": "74fde2eb-efcd-4545-a83b-cd471eb49e6e" 00:15:30.099 } 00:15:30.099 ] 00:15:30.099 }, 00:15:30.099 { 00:15:30.099 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:30.099 "subtype": "NVMe", 00:15:30.099 "listen_addresses": [ 00:15:30.099 { 00:15:30.099 "trtype": "TCP", 00:15:30.099 "adrfam": "IPv4", 00:15:30.099 "traddr": "10.0.0.2", 00:15:30.099 "trsvcid": "4420" 00:15:30.099 } 00:15:30.099 ], 00:15:30.099 "allow_any_host": true, 00:15:30.099 "hosts": [], 00:15:30.099 "serial_number": "SPDK00000000000002", 00:15:30.099 "model_number": "SPDK bdev Controller", 00:15:30.099 "max_namespaces": 32, 00:15:30.099 "min_cntlid": 1, 00:15:30.099 "max_cntlid": 65519, 00:15:30.099 "namespaces": [ 00:15:30.099 { 00:15:30.099 "nsid": 1, 00:15:30.099 "bdev_name": "Null2", 00:15:30.099 "name": "Null2", 00:15:30.099 "nguid": "69E21C6DA03E4CBDB33C8691F7D08910", 00:15:30.099 "uuid": "69e21c6d-a03e-4cbd-b33c-8691f7d08910" 00:15:30.099 } 00:15:30.099 ] 00:15:30.099 }, 00:15:30.099 { 00:15:30.099 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:30.099 "subtype": "NVMe", 00:15:30.099 "listen_addresses": [ 00:15:30.099 { 00:15:30.099 "trtype": "TCP", 00:15:30.099 "adrfam": "IPv4", 00:15:30.099 "traddr": "10.0.0.2", 
00:15:30.099 "trsvcid": "4420" 00:15:30.099 } 00:15:30.099 ], 00:15:30.099 "allow_any_host": true, 00:15:30.099 "hosts": [], 00:15:30.099 "serial_number": "SPDK00000000000003", 00:15:30.099 "model_number": "SPDK bdev Controller", 00:15:30.099 "max_namespaces": 32, 00:15:30.099 "min_cntlid": 1, 00:15:30.099 "max_cntlid": 65519, 00:15:30.099 "namespaces": [ 00:15:30.099 { 00:15:30.099 "nsid": 1, 00:15:30.099 "bdev_name": "Null3", 00:15:30.099 "name": "Null3", 00:15:30.099 "nguid": "084E2BE86F5E43B8B1FC60C6E54CDE49", 00:15:30.099 "uuid": "084e2be8-6f5e-43b8-b1fc-60c6e54cde49" 00:15:30.099 } 00:15:30.099 ] 00:15:30.099 }, 00:15:30.099 { 00:15:30.099 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:30.099 "subtype": "NVMe", 00:15:30.099 "listen_addresses": [ 00:15:30.099 { 00:15:30.099 "trtype": "TCP", 00:15:30.099 "adrfam": "IPv4", 00:15:30.099 "traddr": "10.0.0.2", 00:15:30.099 "trsvcid": "4420" 00:15:30.099 } 00:15:30.099 ], 00:15:30.099 "allow_any_host": true, 00:15:30.099 "hosts": [], 00:15:30.099 "serial_number": "SPDK00000000000004", 00:15:30.099 "model_number": "SPDK bdev Controller", 00:15:30.099 "max_namespaces": 32, 00:15:30.099 "min_cntlid": 1, 00:15:30.099 "max_cntlid": 65519, 00:15:30.099 "namespaces": [ 00:15:30.099 { 00:15:30.099 "nsid": 1, 00:15:30.099 "bdev_name": "Null4", 00:15:30.099 "name": "Null4", 00:15:30.099 "nguid": "57E1B7100D8244D7A7DB832481EA47A7", 00:15:30.099 "uuid": "57e1b710-0d82-44d7-a7db-832481ea47a7" 00:15:30.099 } 00:15:30.099 ] 00:15:30.099 } 00:15:30.099 ] 00:15:30.099 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.099 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.361 07:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:30.361 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:30.362 07:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:30.362 rmmod nvme_tcp 00:15:30.362 rmmod nvme_fabrics 00:15:30.362 rmmod nvme_keyring 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2286108 ']' 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2286108 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2286108 ']' 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2286108 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.362 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286108 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286108' 00:15:30.623 killing process with pid 2286108 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2286108 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2286108 00:15:30.623 07:10:41 
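
Teardown runs in reverse, traced above and finishing just below: RPC-level deletion, a check that no bdevs survived, kernel module unload, target shutdown, and finally the firewall and namespace cleanup. In outline, with the same rpc.py stand-in as before (the last two lines are an assumption about what iptr and _remove_spdk_ns amount to on this box):

    rpc=./scripts/rpc.py
    for i in $(seq 1 4); do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "Null$i"
    done
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    [[ -z "$($rpc bdev_get_bdevs | jq -r '.[].name')" ]]    # nothing left behind
    sync
    modprobe -v -r nvme-tcp                                 # retried up to 20 times above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                         # assumed _remove_spdk_ns equivalent
    ip -4 addr flush cvl_0_1
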
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.623 07:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.169 07:10:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:33.169 00:15:33.169 real 0m11.695s 00:15:33.169 user 0m8.913s 00:15:33.169 sys 0m6.134s 00:15:33.169 07:10:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.169 07:10:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.169 ************************************ 00:15:33.169 END TEST nvmf_target_discovery 00:15:33.169 ************************************ 00:15:33.169 07:10:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:33.169 07:10:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.169 07:10:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.170 07:10:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.170 ************************************ 00:15:33.170 START TEST nvmf_referrals 00:15:33.170 ************************************ 00:15:33.170 07:10:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:33.170 * Looking for test storage... 
00:15:33.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:33.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.170 --rc genhtml_branch_coverage=1 00:15:33.170 --rc genhtml_function_coverage=1 00:15:33.170 --rc genhtml_legend=1 00:15:33.170 --rc geninfo_all_blocks=1 00:15:33.170 --rc geninfo_unexecuted_blocks=1 00:15:33.170 00:15:33.170 ' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:33.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.170 --rc genhtml_branch_coverage=1 00:15:33.170 --rc genhtml_function_coverage=1 00:15:33.170 --rc genhtml_legend=1 00:15:33.170 --rc geninfo_all_blocks=1 00:15:33.170 --rc geninfo_unexecuted_blocks=1 00:15:33.170 00:15:33.170 ' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:33.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.170 --rc genhtml_branch_coverage=1 00:15:33.170 --rc genhtml_function_coverage=1 00:15:33.170 --rc genhtml_legend=1 00:15:33.170 --rc geninfo_all_blocks=1 00:15:33.170 --rc geninfo_unexecuted_blocks=1 00:15:33.170 00:15:33.170 ' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:33.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.170 --rc genhtml_branch_coverage=1 00:15:33.170 --rc genhtml_function_coverage=1 00:15:33.170 --rc genhtml_legend=1 00:15:33.170 --rc geninfo_all_blocks=1 00:15:33.170 --rc geninfo_unexecuted_blocks=1 00:15:33.170 00:15:33.170 ' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.170 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
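
A few entries up, the referrals test probes the installed lcov with scripts/common.sh's cmp_versions before choosing coverage flags: both versions are split on '.', '-' and ':' and compared component by component, treating missing components as zero. The lt case in miniature (the real helper dispatches '>', '<' and '==' through the same loop):

    lt() {
        local -a v1 v2
        local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal, so not strictly less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x: use the old option set"
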
00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:33.171 07:10:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:15:41.314 07:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:41.314 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:41.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:41.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:41.315 
07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:41.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:41.315 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:41.315 07:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:41.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:41.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:15:41.315 00:15:41.315 --- 10.0.0.2 ping statistics --- 00:15:41.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.315 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:41.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:15:41.315 00:15:41.315 --- 10.0.0.1 ping statistics --- 00:15:41.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.315 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2290663 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2290663 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2290663 ']' 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
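The nvmf_tcp_init sequence above wires a two-port topology out of the e810 functions: one port is moved into a network namespace as the target endpoint, the other stays in the root namespace as the initiator, and a ping in each direction proves the path. Collected into one place as a sketch (interface names, namespace name and addresses copied from the log):

    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator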
00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.315 07:10:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.315 [2024-11-27 07:10:51.834358] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:15:41.315 [2024-11-27 07:10:51.834424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.316 [2024-11-27 07:10:51.933745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.316 [2024-11-27 07:10:51.986824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.316 [2024-11-27 07:10:51.986876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.316 [2024-11-27 07:10:51.986885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.316 [2024-11-27 07:10:51.986893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.316 [2024-11-27 07:10:51.986899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.316 [2024-11-27 07:10:51.989251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.316 [2024-11-27 07:10:51.989420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.316 [2024-11-27 07:10:51.989582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.316 [2024-11-27 07:10:51.989582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.578 [2024-11-27 07:10:52.707400] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
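With nvmf_tgt up inside the namespace, the referral provisioning that follows is all JSON-RPC. Spelled out against scripts/rpc.py, which rpc_cmd is a thin wrapper around, the sequence amounts to (commands and arguments copied from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length     # the test expects 3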
00:15:41.578 [2024-11-27 07:10:52.735397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.578 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:41.840 07:10:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:42.102 07:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:42.102 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:42.365 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:42.629 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:42.629 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:42.629 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:42.629 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:42.629 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:42.629 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:42.629 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:42.892 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:42.892 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:42.892 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:42.892 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:42.892 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:42.892 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:43.154 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:43.154 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:43.154 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.154 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:43.154 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.155 07:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:43.155 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:43.417 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:43.692 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
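Each verification step above pairs the target-side view with what an initiator actually sees on the wire. Condensed, the comparison get_referral_ips performs is roughly (jq filters copied from the log; the --hostnqn/--hostid flags are elided here, and RPC is the rpc.py path from the earlier sketch):

    rpc_ips=$($RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort)
    [[ "$rpc_ips" == "$nvme_ips" ]]                   # both views must agree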
00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:43.953 rmmod nvme_tcp 00:15:43.953 rmmod nvme_fabrics 00:15:43.953 rmmod nvme_keyring 00:15:43.953 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2290663 ']' 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2290663 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2290663 ']' 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2290663 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2290663 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2290663' 00:15:44.215 killing process with pid 2290663 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2290663 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2290663 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.215 07:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.215 07:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:46.768 00:15:46.768 real 0m13.499s 00:15:46.768 user 0m16.518s 00:15:46.768 sys 0m6.673s 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:46.768 ************************************ 00:15:46.768 END TEST nvmf_referrals 00:15:46.768 ************************************ 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.768 ************************************ 00:15:46.768 START TEST nvmf_connect_disconnect 00:15:46.768 ************************************ 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:46.768 * Looking for test storage... 00:15:46.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.768 07:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:46.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.768 --rc genhtml_branch_coverage=1 00:15:46.768 --rc genhtml_function_coverage=1 00:15:46.768 --rc genhtml_legend=1 00:15:46.768 --rc geninfo_all_blocks=1 00:15:46.768 --rc geninfo_unexecuted_blocks=1 00:15:46.768 00:15:46.768 ' 00:15:46.768 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:46.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.769 --rc genhtml_branch_coverage=1 00:15:46.769 --rc genhtml_function_coverage=1 00:15:46.769 --rc genhtml_legend=1 00:15:46.769 --rc geninfo_all_blocks=1 00:15:46.769 --rc geninfo_unexecuted_blocks=1 00:15:46.769 00:15:46.769 ' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:46.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.769 --rc genhtml_branch_coverage=1 00:15:46.769 --rc genhtml_function_coverage=1 00:15:46.769 --rc genhtml_legend=1 00:15:46.769 --rc geninfo_all_blocks=1 00:15:46.769 --rc geninfo_unexecuted_blocks=1 00:15:46.769 00:15:46.769 ' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:46.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.769 --rc genhtml_branch_coverage=1 00:15:46.769 --rc genhtml_function_coverage=1 00:15:46.769 --rc genhtml_legend=1 00:15:46.769 --rc geninfo_all_blocks=1 00:15:46.769 --rc geninfo_unexecuted_blocks=1 00:15:46.769 00:15:46.769 ' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.769 07:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:46.769 07:10:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:54.933 
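The "[: : integer expression expected" message above is a genuine shell error rather than test output: nvmf/common.sh line 33 runs an integer test ('[' '' -eq 1 ']') on a variable that expanded to the empty string, so [ cannot compare it. Because the test sits in an if condition, the failure simply selects the false branch and the run continues, which is why the identical message reappears when multitarget.sh re-sources common.sh later in this log. A minimal sketch of the defensive spelling that would silence it (GATE_FLAG is a hypothetical stand-in; the real variable name at line 33 is not visible in this trace):

  # Default the flag to 0 so an unset/empty value never reaches the integer test.
  if [ "${GATE_FLAG:-0}" -eq 1 ]; then
      echo "gate enabled"
  fi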
07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:54.933 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:54.934 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:54.934 
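The array-building records above are gather_supported_nvmf_pci_devs classifying every NIC port by its PCI vendor:device pair (e810 and x722 buckets for Intel 0x8086 IDs, mlx for Mellanox 0x15b3 IDs); the "Found 0000:4b:00.0 (0x8086 - 0x159b)" hit is the first of the two E810 ports this run will use. A rough reconstruction of the idea under stated assumptions: pci_bus_cache is rebuilt here with plain lspci rather than the harness's own cache helper, and only one of the several Mellanox IDs probed above is shown.

  declare -A pci_bus_cache
  # Key every PCI function by "0xVENDOR:0xDEVICE" -> space-joined bus addresses.
  while read -r addr _class vendor device _; do
      pci_bus_cache["0x${vendor}:0x${device}"]+="$addr "
  done < <(lspci -Dnmm | tr -d '"')

  read -r -a e810 <<< "${pci_bus_cache[0x8086:0x159b]:-}"   # the hits in the log
  read -r -a x722 <<< "${pci_bus_cache[0x8086:0x37d2]:-}"
  read -r -a mlx  <<< "${pci_bus_cache[0x15b3:0x1017]:-}"   # one of several IDs probed above
  echo "E810 ports: ${e810[*]:-none}"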
07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:54.934 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:54.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:54.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.934 07:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:54.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:15:54.934 00:15:54.934 --- 10.0.0.2 ping statistics --- 00:15:54.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.934 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:15:54.934 00:15:54.934 --- 10.0.0.1 ping statistics --- 00:15:54.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.934 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2295733 00:15:54.934 07:11:05 
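This block is the heart of the phy TCP topology: one E810 port (cvl_0_0, the target side) is moved into a private namespace and given 10.0.0.2, its sibling port (cvl_0_1, the initiator side) stays in the root namespace as 10.0.0.1, a firewall exception is punched for port 4420, and the two pings prove the dataplane in both directions before any NVMe traffic flows. Boiled down to the bare commands traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns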
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2295733 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2295733 ']' 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.934 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:54.934 [2024-11-27 07:11:05.318405] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:15:54.934 [2024-11-27 07:11:05.318477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.935 [2024-11-27 07:11:05.420295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.935 [2024-11-27 07:11:05.472628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.935 [2024-11-27 07:11:05.472685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.935 [2024-11-27 07:11:05.472700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.935 [2024-11-27 07:11:05.472707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.935 [2024-11-27 07:11:05.472713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
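nvmfappstart then prefixes NVMF_APP with the namespace wrapper and launches the target; waitforlisten blocks on /var/tmp/spdk.sock (the rpc_addr and max_retries=100 defaults are both visible in the trace) until the app answers RPCs. A condensed sketch of that sequence; the polling loop stands in for the harness's waitforlisten helper, and rpc_get_methods is merely assumed here as a cheap probe RPC:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do    # mirrors max_retries=100 from the trace
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  # Per the NOTICE lines above, 'spdk_trace -s nvmf -i 0' (or copying
  # /dev/shm/nvmf_trace.0) snapshots the 0xFFFF tracepoint mask while it runs.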
00:15:54.935 [2024-11-27 07:11:05.475107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.935 [2024-11-27 07:11:05.475270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.935 [2024-11-27 07:11:05.475320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.935 [2024-11-27 07:11:05.475319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 [2024-11-27 07:11:06.193541] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 07:11:06 
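rpc_cmd is the harness wrapper around scripts/rpc.py aimed at the namespaced app; spelled out linearly, the storage stack assembled above, plus the listener added in the very next records, is roughly:

  r() { ./scripts/rpc.py "$@"; }                               # hypothetical shorthand
  r nvmf_create_transport -t tcp -o -u 8192 -c 0               # flags exactly as traced
  r bdev_malloc_create 64 512                                  # 64 MiB RAM bdev, 512 B blocks
  r nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  r nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # Malloc0 becomes the namespace
  r nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the subsystem call -a allows any host and -s sets the serial; on the listener -a and -s are the address and service id (4420, the NVMF_PORT set at the top of common.sh).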
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 [2024-11-27 07:11:06.272621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:55.196 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:59.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:13.695 rmmod nvme_tcp 00:16:13.695 rmmod nvme_fabrics 00:16:13.695 rmmod nvme_keyring 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2295733 ']' 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2295733 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2295733 ']' 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2295733 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
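The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are nvme-cli output from num_iterations=5 trips through the test's core loop; stripped of the harness's wait-for-block-device checks, each iteration is essentially the following sketch (the connect flags come from the NVME_HOST array built when common.sh was sourced):

  for i in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      sleep 1   # stand-in for waiting until the namespace appears as a block device
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the "disconnected" line
  done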
00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2295733 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2295733' 00:16:13.695 killing process with pid 2295733 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2295733 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2295733 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.695 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.239 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:16.239 00:16:16.239 real 0m29.366s 00:16:16.239 user 1m19.099s 00:16:16.239 sys 0m7.191s 00:16:16.239 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.239 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:16.239 ************************************ 00:16:16.239 END TEST nvmf_connect_disconnect 00:16:16.239 ************************************ 00:16:16.239 07:11:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:16.239 07:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.239 07:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.239 07:11:26 
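Teardown above shows the payoff of the tagged firewall rule from setup: every rule the ipts wrapper inserted carried an SPDK_NVMF comment, so iptr can strip all of them in one save/filter/restore pass without disturbing unrelated rules on the host. The pair, reconstructed from the @790/@791 expansions in this log:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

  ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged at setup
  iptr                                                       # removed wholesale at teardown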
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:16.239 ************************************ 00:16:16.239 START TEST nvmf_multitarget 00:16:16.239 ************************************ 00:16:16.239 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:16.239 * Looking for test storage... 00:16:16.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:16.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.239 --rc genhtml_branch_coverage=1 00:16:16.239 --rc genhtml_function_coverage=1 00:16:16.239 --rc genhtml_legend=1 00:16:16.239 --rc geninfo_all_blocks=1 00:16:16.239 --rc geninfo_unexecuted_blocks=1 00:16:16.239 00:16:16.239 ' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:16.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.239 --rc genhtml_branch_coverage=1 00:16:16.239 --rc genhtml_function_coverage=1 00:16:16.239 --rc genhtml_legend=1 00:16:16.239 --rc geninfo_all_blocks=1 00:16:16.239 --rc geninfo_unexecuted_blocks=1 00:16:16.239 00:16:16.239 ' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:16.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.239 --rc genhtml_branch_coverage=1 00:16:16.239 --rc genhtml_function_coverage=1 00:16:16.239 --rc genhtml_legend=1 00:16:16.239 --rc geninfo_all_blocks=1 00:16:16.239 --rc geninfo_unexecuted_blocks=1 00:16:16.239 00:16:16.239 ' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:16.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.239 --rc genhtml_branch_coverage=1 00:16:16.239 --rc genhtml_function_coverage=1 00:16:16.239 --rc genhtml_legend=1 00:16:16.239 --rc geninfo_all_blocks=1 00:16:16.239 --rc geninfo_unexecuted_blocks=1 00:16:16.239 00:16:16.239 ' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.239 07:11:27 
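The lt 1.15 2 probe above is autotest_common.sh asking whether the installed lcov (1.15 here) predates 2.x, because older lcov wants the --rc lcov_branch_coverage=1 spelling that gets exported immediately afterwards. A compact re-derivation of the field-by-field comparison cmp_versions performs (numeric version components only, as in the trace):

  lt() {
      local -a a b; local i
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # versions equal
  }
  lt 1.15 2 && echo "legacy lcov: keep the lcov_branch_coverage options"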
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.239 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated by earlier sourcing; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain dirs re-prepended; duplicates elided]:/var/lib/snapd/snap/bin 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain dirs re-prepended; duplicates elided]:/var/lib/snapd/snap/bin 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same PATH value as @4; duplicates elided]:/var/lib/snapd/snap/bin 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:16.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:16.240 07:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:16.240 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:24.385 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:24.386 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:24.386 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:24.386 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:24.386 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:24.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:16:24.386 00:16:24.386 --- 10.0.0.2 ping statistics --- 00:16:24.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.386 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:16:24.386 00:16:24.386 --- 10.0.0.1 ping statistics --- 00:16:24.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.386 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2303686 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2303686 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2303686 ']' 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.386 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:24.386 [2024-11-27 07:11:34.707513] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:16:24.386 [2024-11-27 07:11:34.707581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.386 [2024-11-27 07:11:34.808606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.386 [2024-11-27 07:11:34.862287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.386 [2024-11-27 07:11:34.862339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.386 [2024-11-27 07:11:34.862347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.386 [2024-11-27 07:11:34.862354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.386 [2024-11-27 07:11:34.862361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.386 [2024-11-27 07:11:34.864455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.387 [2024-11-27 07:11:34.864614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.387 [2024-11-27 07:11:34.864780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.387 [2024-11-27 07:11:34.864780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.387 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.387 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:24.387 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.387 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.387 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:24.387 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.387 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:24.648 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:24.648 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:24.648 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:24.648 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:24.648 "nvmf_tgt_1" 00:16:24.648 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:24.910 "nvmf_tgt_2" 00:16:24.910 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:24.910 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:24.910 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:24.910 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:25.172 true 00:16:25.172 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:25.172 true 00:16:25.172 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:25.172 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.434 rmmod nvme_tcp 00:16:25.434 rmmod nvme_fabrics 00:16:25.434 rmmod nvme_keyring 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2303686 ']' 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2303686 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2303686 ']' 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2303686 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2303686 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.434 07:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2303686' 00:16:25.434 killing process with pid 2303686 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2303686 00:16:25.434 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2303686 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.696 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.609 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:27.609 00:16:27.609 real 0m11.821s 00:16:27.609 user 0m10.299s 00:16:27.609 sys 0m6.167s 00:16:27.609 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.609 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:27.609 ************************************ 00:16:27.609 END TEST nvmf_multitarget 00:16:27.609 ************************************ 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.870 ************************************ 00:16:27.870 START TEST nvmf_rpc 00:16:27.870 ************************************ 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:27.870 * Looking for test storage... 
00:16:27.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.870 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:27.870 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.871 --rc genhtml_branch_coverage=1 00:16:27.871 --rc genhtml_function_coverage=1 00:16:27.871 --rc genhtml_legend=1 00:16:27.871 --rc geninfo_all_blocks=1 00:16:27.871 --rc geninfo_unexecuted_blocks=1 00:16:27.871 00:16:27.871 ' 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.871 --rc genhtml_branch_coverage=1 00:16:27.871 --rc genhtml_function_coverage=1 00:16:27.871 --rc genhtml_legend=1 00:16:27.871 --rc geninfo_all_blocks=1 00:16:27.871 --rc geninfo_unexecuted_blocks=1 00:16:27.871 00:16:27.871 ' 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.871 --rc genhtml_branch_coverage=1 00:16:27.871 --rc genhtml_function_coverage=1 00:16:27.871 --rc genhtml_legend=1 00:16:27.871 --rc geninfo_all_blocks=1 00:16:27.871 --rc geninfo_unexecuted_blocks=1 00:16:27.871 00:16:27.871 ' 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.871 --rc genhtml_branch_coverage=1 00:16:27.871 --rc genhtml_function_coverage=1 00:16:27.871 --rc genhtml_legend=1 00:16:27.871 --rc geninfo_all_blocks=1 00:16:27.871 --rc geninfo_unexecuted_blocks=1 00:16:27.871 00:16:27.871 ' 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.871 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
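The version check traced above ("lt 1.15 2", i.e. scripts/common.sh asking whether the detected lcov 1.15 predates version 2 before choosing coverage options) boils down to splitting both version strings on ".", "-" and ":" and comparing the fields numerically from the left. A minimal sketch of that comparison follows; the name lt_version and the fallback of treating a non-numeric field as 1 are illustrative assumptions, not the exact cmp_versions/decimal implementation in common.sh:

    # Simplified reconstruction of the comparison traced above: split each
    # version on ".-:" and compare the fields numerically, left to right.
    lt_version() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing fields compare as 0; non-numeric fields fall back to 1
            # (an assumption -- common.sh handles this via its decimal helper).
            local d1=${ver1[i]:-0} d2=${ver2[i]:-0}
            [[ $d1 =~ ^[0-9]+$ ]] || d1=1
            [[ $d2 =~ ^[0-9]+$ ]] || d2=1
            (( d1 < d2 )) && return 0
            (( d1 > d2 )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    lt_version 1.15 2 && echo "lcov 1.15 is older than 2"  # matches the trace: return 0

As in the trace, the first differing field decides the result, which is why the run above returns success at index 0 (1 < 2) and selects the pre-2.0 lcov option set.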
00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.136 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.137 07:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.137 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:36.283 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:36.283 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.283 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:36.284 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:36.284 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.284 07:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:16:36.284 00:16:36.284 --- 10.0.0.2 ping statistics --- 00:16:36.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.284 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:16:36.284 00:16:36.284 --- 10.0.0.1 ping statistics --- 00:16:36.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.284 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2308260 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2308260 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2308260 ']' 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.284 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.284 [2024-11-27 07:11:46.672853] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:16:36.284 [2024-11-27 07:11:46.672920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.284 [2024-11-27 07:11:46.772377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.284 [2024-11-27 07:11:46.825609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.284 [2024-11-27 07:11:46.825661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.284 [2024-11-27 07:11:46.825670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.284 [2024-11-27 07:11:46.825677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.284 [2024-11-27 07:11:46.825683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.284 [2024-11-27 07:11:46.827862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.284 [2024-11-27 07:11:46.828027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.284 [2024-11-27 07:11:46.828235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.284 [2024-11-27 07:11:46.828263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:36.547 "tick_rate": 2400000000, 00:16:36.547 "poll_groups": [ 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_000", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 "current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [] 00:16:36.547 }, 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_001", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 "current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [] 00:16:36.547 }, 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_002", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 
"current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [] 00:16:36.547 }, 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_003", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 "current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [] 00:16:36.547 } 00:16:36.547 ] 00:16:36.547 }' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.547 [2024-11-27 07:11:47.662615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:36.547 "tick_rate": 2400000000, 00:16:36.547 "poll_groups": [ 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_000", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 "current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [ 00:16:36.547 { 00:16:36.547 "trtype": "TCP" 00:16:36.547 } 00:16:36.547 ] 00:16:36.547 }, 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_001", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 "current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [ 00:16:36.547 { 00:16:36.547 "trtype": "TCP" 00:16:36.547 } 00:16:36.547 ] 00:16:36.547 }, 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_002", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 "current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [ 00:16:36.547 { 00:16:36.547 "trtype": "TCP" 
00:16:36.547 } 00:16:36.547 ] 00:16:36.547 }, 00:16:36.547 { 00:16:36.547 "name": "nvmf_tgt_poll_group_003", 00:16:36.547 "admin_qpairs": 0, 00:16:36.547 "io_qpairs": 0, 00:16:36.547 "current_admin_qpairs": 0, 00:16:36.547 "current_io_qpairs": 0, 00:16:36.547 "pending_bdev_io": 0, 00:16:36.547 "completed_nvme_io": 0, 00:16:36.547 "transports": [ 00:16:36.547 { 00:16:36.547 "trtype": "TCP" 00:16:36.547 } 00:16:36.547 ] 00:16:36.547 } 00:16:36.547 ] 00:16:36.547 }' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:36.547 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.809 Malloc1 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.809 [2024-11-27 07:11:47.870905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:36.809 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:36.809 [2024-11-27 07:11:47.907885] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:16:36.810 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:36.810 could not add new controller: failed to write to nvme-fabrics device 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:36.810 07:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.810 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.730 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.730 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:38.730 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.730 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:38.730 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.651 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.652 [2024-11-27 07:11:51.636945] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:16:40.652 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:40.652 could not add new controller: failed to write to nvme-fabrics device 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.652 
07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.652 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:42.033 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:42.033 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:42.033 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.033 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:42.033 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:44.574 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:44.574 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:44.574 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.574 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:44.574 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.574 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:44.574 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:44.575 
07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.575 [2024-11-27 07:11:55.363307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.575 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:45.955 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:45.955 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:45.955 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.955 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:45.955 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:47.866 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:47.866 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:47.866 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.866 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:47.866 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.866 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:47.866 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.866 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:47.866 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.866 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.866 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.866 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.866 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.132 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:48.132 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.132 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.132 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.132 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.132 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.132 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.133 [2024-11-27 07:11:59.125188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.133 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.524 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.524 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.524 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.524 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.524 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 [2024-11-27 07:12:02.873064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.072 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.457 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.457 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:53.457 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.457 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:53.457 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:55.369 
07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.369 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.369 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.369 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:55.369 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.369 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:55.369 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.370 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.370 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:55.370 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:55.370 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.370 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:55.370 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 [2024-11-27 07:12:06.625058] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.631 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.018 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.018 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:57.018 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.018 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:57.018 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 [2024-11-27 07:12:10.375005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.564 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.949 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.949 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:00.949 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.949 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:00.949 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:02.861 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:02.861 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:02.861 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.861 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:02.861 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.861 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:02.861 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.861 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.861 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:02.861 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:02.861 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:03.123 
07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 [2024-11-27 07:12:14.141890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 [2024-11-27 07:12:14.210051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 
07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 [2024-11-27 07:12:14.274254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.123 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.124 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.124 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.124 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.124 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.124 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 [2024-11-27 07:12:14.342453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.385 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 [2024-11-27 07:12:14.410679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:03.386 "tick_rate": 2400000000, 00:17:03.386 "poll_groups": [ 00:17:03.386 { 00:17:03.386 "name": "nvmf_tgt_poll_group_000", 00:17:03.386 "admin_qpairs": 0, 00:17:03.386 "io_qpairs": 224, 00:17:03.386 "current_admin_qpairs": 0, 00:17:03.386 "current_io_qpairs": 0, 00:17:03.386 "pending_bdev_io": 0, 00:17:03.386 "completed_nvme_io": 275, 00:17:03.386 "transports": [ 00:17:03.386 { 00:17:03.386 "trtype": "TCP" 00:17:03.386 } 00:17:03.386 ] 00:17:03.386 }, 00:17:03.386 { 00:17:03.386 "name": "nvmf_tgt_poll_group_001", 00:17:03.386 "admin_qpairs": 1, 00:17:03.386 "io_qpairs": 223, 00:17:03.386 "current_admin_qpairs": 0, 00:17:03.386 "current_io_qpairs": 0, 00:17:03.386 "pending_bdev_io": 0, 00:17:03.386 "completed_nvme_io": 517, 00:17:03.386 "transports": [ 00:17:03.386 { 00:17:03.386 "trtype": "TCP" 00:17:03.386 } 00:17:03.386 ] 00:17:03.386 }, 00:17:03.386 { 00:17:03.386 "name": "nvmf_tgt_poll_group_002", 00:17:03.386 "admin_qpairs": 6, 00:17:03.386 "io_qpairs": 218, 00:17:03.386 "current_admin_qpairs": 0, 00:17:03.386 "current_io_qpairs": 0, 00:17:03.386 "pending_bdev_io": 0, 00:17:03.386 "completed_nvme_io": 218, 00:17:03.386 "transports": [ 00:17:03.386 { 00:17:03.386 "trtype": "TCP" 00:17:03.386 } 00:17:03.386 ] 00:17:03.386 }, 00:17:03.386 { 00:17:03.386 "name": "nvmf_tgt_poll_group_003", 00:17:03.386 "admin_qpairs": 0, 00:17:03.386 "io_qpairs": 224, 00:17:03.386 "current_admin_qpairs": 0, 00:17:03.386 "current_io_qpairs": 0, 00:17:03.386 "pending_bdev_io": 0, 00:17:03.386 "completed_nvme_io": 229, 00:17:03.386 "transports": [ 00:17:03.386 { 00:17:03.386 "trtype": "TCP" 00:17:03.386 } 00:17:03.386 ] 00:17:03.386 } 00:17:03.386 ] 00:17:03.386 }' 00:17:03.386 07:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.386 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.386 rmmod nvme_tcp 00:17:03.647 rmmod nvme_fabrics 00:17:03.647 rmmod nvme_keyring 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2308260 ']' 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2308260 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2308260 ']' 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2308260 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2308260 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2308260' 00:17:03.647 killing process with pid 2308260 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2308260 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2308260 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.647 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.196 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:06.196 00:17:06.196 real 0m38.050s 00:17:06.196 user 1m54.009s 00:17:06.196 sys 0m7.867s 00:17:06.196 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.197 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.197 ************************************ 00:17:06.197 END TEST nvmf_rpc 00:17:06.197 ************************************ 00:17:06.197 07:12:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:06.197 07:12:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.197 07:12:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.197 07:12:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.197 ************************************ 00:17:06.197 START TEST nvmf_invalid 00:17:06.197 ************************************ 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:06.197 * Looking for test storage... 
00:17:06.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:06.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.197 --rc genhtml_branch_coverage=1 00:17:06.197 --rc genhtml_function_coverage=1 00:17:06.197 --rc genhtml_legend=1 00:17:06.197 --rc geninfo_all_blocks=1 00:17:06.197 --rc geninfo_unexecuted_blocks=1 00:17:06.197 00:17:06.197 ' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:06.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.197 --rc genhtml_branch_coverage=1 00:17:06.197 --rc genhtml_function_coverage=1 00:17:06.197 --rc genhtml_legend=1 00:17:06.197 --rc geninfo_all_blocks=1 00:17:06.197 --rc geninfo_unexecuted_blocks=1 00:17:06.197 00:17:06.197 ' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:06.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.197 --rc genhtml_branch_coverage=1 00:17:06.197 --rc genhtml_function_coverage=1 00:17:06.197 --rc genhtml_legend=1 00:17:06.197 --rc geninfo_all_blocks=1 00:17:06.197 --rc geninfo_unexecuted_blocks=1 00:17:06.197 00:17:06.197 ' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:06.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.197 --rc genhtml_branch_coverage=1 00:17:06.197 --rc genhtml_function_coverage=1 00:17:06.197 --rc genhtml_legend=1 00:17:06.197 --rc geninfo_all_blocks=1 00:17:06.197 --rc geninfo_unexecuted_blocks=1 00:17:06.197 00:17:06.197 ' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:06.197 07:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.197 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:06.198 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.334 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:14.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:14.335 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:14.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:14.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:14.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.796 ms 00:17:14.335 00:17:14.335 --- 10.0.0.2 ping statistics --- 00:17:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.335 rtt min/avg/max/mdev = 0.796/0.796/0.796/0.000 ms 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:17:14.335 00:17:14.335 --- 10.0.0.1 ping statistics --- 00:17:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.335 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:14.335 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2318682 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2318682 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2318682 ']' 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.336 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.336 [2024-11-27 07:12:24.746515] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
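[Editor's sketch] The interface plumbing traced just above splits the two E810 ports between network namespaces: the target port moves into its own netns, the initiator port stays in the root namespace, and an iptables rule opens the NVMe/TCP listener port before connectivity is verified with a ping in each direction. Condensed from the commands in the trace (interface and namespace names as logged):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator IP in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1 # target ns -> root ns

nvmf_tgt is then launched under ip netns exec so its TCP listeners bind inside the namespace, while the negative RPC tests that follow talk to it over the /var/tmp/spdk.sock UNIX socket.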
00:17:14.336 [2024-11-27 07:12:24.746587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.336 [2024-11-27 07:12:24.844567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.336 [2024-11-27 07:12:24.897053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.336 [2024-11-27 07:12:24.897106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.336 [2024-11-27 07:12:24.897116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.336 [2024-11-27 07:12:24.897123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.336 [2024-11-27 07:12:24.897130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.336 [2024-11-27 07:12:24.899207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.336 [2024-11-27 07:12:24.899333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.336 [2024-11-27 07:12:24.899495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.336 [2024-11-27 07:12:24.899497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:14.598 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7715 00:17:14.598 [2024-11-27 07:12:25.782070] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:14.860 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:14.860 { 00:17:14.860 "nqn": "nqn.2016-06.io.spdk:cnode7715", 00:17:14.860 "tgt_name": "foobar", 00:17:14.860 "method": "nvmf_create_subsystem", 00:17:14.860 "req_id": 1 00:17:14.860 } 00:17:14.860 Got JSON-RPC error response 00:17:14.860 response: 00:17:14.860 { 00:17:14.860 "code": -32603, 00:17:14.860 "message": "Unable to find target foobar" 00:17:14.860 }' 00:17:14.860 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:14.860 { 00:17:14.860 "nqn": "nqn.2016-06.io.spdk:cnode7715", 00:17:14.860 "tgt_name": "foobar", 00:17:14.860 "method": "nvmf_create_subsystem", 00:17:14.860 "req_id": 1 00:17:14.860 } 00:17:14.860 Got JSON-RPC error response 00:17:14.860 
response: 00:17:14.860 { 00:17:14.860 "code": -32603, 00:17:14.860 "message": "Unable to find target foobar" 00:17:14.860 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:14.860 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:14.860 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17284 00:17:14.860 [2024-11-27 07:12:25.990922] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17284: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:14.860 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:14.860 { 00:17:14.860 "nqn": "nqn.2016-06.io.spdk:cnode17284", 00:17:14.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:14.860 "method": "nvmf_create_subsystem", 00:17:14.860 "req_id": 1 00:17:14.860 } 00:17:14.860 Got JSON-RPC error response 00:17:14.860 response: 00:17:14.860 { 00:17:14.860 "code": -32602, 00:17:14.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:14.860 }' 00:17:14.860 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:14.860 { 00:17:14.860 "nqn": "nqn.2016-06.io.spdk:cnode17284", 00:17:14.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:14.860 "method": "nvmf_create_subsystem", 00:17:14.860 "req_id": 1 00:17:14.860 } 00:17:14.860 Got JSON-RPC error response 00:17:14.860 response: 00:17:14.860 { 00:17:14.860 "code": -32602, 00:17:14.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:14.860 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:14.860 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:14.860 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4431 00:17:15.123 [2024-11-27 07:12:26.199657] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4431: invalid model number 'SPDK_Controller' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:15.123 { 00:17:15.123 "nqn": "nqn.2016-06.io.spdk:cnode4431", 00:17:15.123 "model_number": "SPDK_Controller\u001f", 00:17:15.123 "method": "nvmf_create_subsystem", 00:17:15.123 "req_id": 1 00:17:15.123 } 00:17:15.123 Got JSON-RPC error response 00:17:15.123 response: 00:17:15.123 { 00:17:15.123 "code": -32602, 00:17:15.123 "message": "Invalid MN SPDK_Controller\u001f" 00:17:15.123 }' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:15.123 { 00:17:15.123 "nqn": "nqn.2016-06.io.spdk:cnode4431", 00:17:15.123 "model_number": "SPDK_Controller\u001f", 00:17:15.123 "method": "nvmf_create_subsystem", 00:17:15.123 "req_id": 1 00:17:15.123 } 00:17:15.123 Got JSON-RPC error response 00:17:15.123 response: 00:17:15.123 { 00:17:15.123 "code": -32602, 00:17:15.123 "message": "Invalid MN SPDK_Controller\u001f" 00:17:15.123 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:15.123 07:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.123 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
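[Editor's sketch] The long run of printf/echo traces here is gen_random_s from target/invalid.sh assembling a string one byte at a time; RANDOM=0 was set earlier (target/invalid.sh@16), so the "random" strings are reproducible across runs. A sketch of the loop being traced, with the helper body reconstructed from the visible steps:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})   # the codepoint table from the trace
        for (( ll = 0; ll < length; ll++ )); do
            # printf %x renders the chosen codepoint as hex and
            # echo -e turns that back into the actual character.
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

    gen_random_s 21   # e.g. the 21-byte invalid serial number built here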
00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 
00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:15.386 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '&0,@/XD+vC?PgXG3TNd'\''r' 00:17:15.387 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '&0,@/XD+vC?PgXG3TNd'\''r' nqn.2016-06.io.spdk:cnode6300 00:17:15.387 [2024-11-27 07:12:26.589043] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6300: invalid serial number '&0,@/XD+vC?PgXG3TNd'r' 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:15.650 { 00:17:15.650 "nqn": "nqn.2016-06.io.spdk:cnode6300", 00:17:15.650 "serial_number": "&0,@/XD+vC?PgXG3TNd'\''r", 00:17:15.650 "method": "nvmf_create_subsystem", 00:17:15.650 "req_id": 1 00:17:15.650 } 00:17:15.650 Got JSON-RPC error response 00:17:15.650 response: 00:17:15.650 { 00:17:15.650 "code": -32602, 00:17:15.650 "message": "Invalid SN &0,@/XD+vC?PgXG3TNd'\''r" 00:17:15.650 }' 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:15.650 { 00:17:15.650 "nqn": "nqn.2016-06.io.spdk:cnode6300", 00:17:15.650 "serial_number": "&0,@/XD+vC?PgXG3TNd'r", 00:17:15.650 "method": "nvmf_create_subsystem", 00:17:15.650 "req_id": 1 00:17:15.650 } 00:17:15.650 Got JSON-RPC error response 00:17:15.650 response: 00:17:15.650 { 00:17:15.650 "code": -32602, 00:17:15.650 "message": "Invalid SN &0,@/XD+vC?PgXG3TNd'r" 00:17:15.650 } == *\I\n\v\a\l\i\d\ \S\N* 
]] 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 
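[Editor's sketch] Every negative case in this file follows the shape just completed for the 21-byte serial number: call nvmf_create_subsystem with one deliberately bad field, capture the JSON-RPC error text, and glob-match the message. A sketch of that pattern, with a hypothetical $rpc variable standing in for the full scripts/rpc.py path and the error-tolerant capture assumed:

    # $'...\037' embeds the non-printable unit-separator byte that
    # invalidates the serial number; rpc.py exits non-zero, so the
    # failure is tolerated and only the message is asserted on.
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
        nqn.2016-06.io.spdk:cnode17284 2>&1) || true
    [[ $out == *"Invalid SN"* ]]   # expect JSON-RPC code -32602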
00:17:15.650 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}'
[repetitive iterations condensed (00:17:15.650-00:17:15.943): for each remaining position, target/invalid.sh@24-25 runs (( ll++ )), (( ll < length )), printf %x <code> and echo -e '\x<code>', then appends the character via string+=<char>, building the 41-character model number echoed at @31 below]
00:17:15.943 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]]
00:17:15.943 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|%2W}yA}y@2QL~2HzqD^c\RP37rI_71=b|TJ[GA]'
00:17:15.943 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '|%2W}yA}y@2QL~2HzqD^c\RP37rI_71=b|TJ[GA]' nqn.2016-06.io.spdk:cnode11270
00:17:16.301 [2024-11-27 07:12:27.131120] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11270: invalid model number '|%2W}yA}y@2QL~2HzqD^c\RP37rI_71=b|TJ[GA]'
00:17:16.301 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:16.301 { 00:17:16.301 "nqn": "nqn.2016-06.io.spdk:cnode11270", 00:17:16.301 "model_number": "|%2W}yA}y@2QL~2HzqD^c\\RP37rI_71=b|TJ\u007f[GA]", 00:17:16.301 "method": "nvmf_create_subsystem", 00:17:16.301 "req_id": 1 00:17:16.301 } 00:17:16.301 Got JSON-RPC error response 00:17:16.301 response: 00:17:16.301 { 00:17:16.301 "code": -32602, 00:17:16.301 "message": "Invalid MN |%2W}yA}y@2QL~2HzqD^c\\RP37rI_71=b|TJ\u007f[GA]" 00:17:16.301 }'
00:17:16.301 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:16.301 { 00:17:16.301 "nqn": "nqn.2016-06.io.spdk:cnode11270", 00:17:16.301 "model_number": "|%2W}yA}y@2QL~2HzqD^c\\RP37rI_71=b|TJ\u007f[GA]", 00:17:16.301 "method": "nvmf_create_subsystem", 00:17:16.301 "req_id": 1 00:17:16.301 } 00:17:16.301 Got JSON-RPC error response 00:17:16.301 response: 00:17:16.301 { 00:17:16.301 "code": -32602, 00:17:16.301 "message": "Invalid MN |%2W}yA}y@2QL~2HzqD^c\\RP37rI_71=b|TJ\u007f[GA]" 00:17:16.301 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:17:16.301 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:17:16.301 [2024-11-27 07:12:27.327912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
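The condensed loop above is the trace of target/invalid.sh assembling a random model number one character at a time. A minimal sketch of the same technique in plain bash, reusing the string/ll/length names visible in the trace; how the script actually draws each code is not shown in this log, so the $RANDOM draw and the 0x21-0x7e range below are assumptions (the trace itself also emitted 0x7f):

# Sketch only: rebuild a random model number the way the traced loop does.
string=''
length=41
for (( ll = 0; ll < length; ll++ )); do
    code=$(( 0x21 + RANDOM % 94 ))                  # a printable ASCII code
    string+=$(echo -e "\\x$(printf '%x' "$code")")  # same append as the trace
done
echo "$string"

The @28 check right after the loop tests whether the string begins with '-', presumably so the generated value is not mistaken for a command-line option by rpc.py.

00:17:16.301 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid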
-- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:16.614 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:16.614 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:16.614 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:16.614 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:16.614 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:16.614 [2024-11-27 07:12:27.741545] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:16.614 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:16.614 { 00:17:16.614 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:16.614 "listen_address": { 00:17:16.614 "trtype": "tcp", 00:17:16.614 "traddr": "", 00:17:16.614 "trsvcid": "4421" 00:17:16.614 }, 00:17:16.614 "method": "nvmf_subsystem_remove_listener", 00:17:16.614 "req_id": 1 00:17:16.614 } 00:17:16.614 Got JSON-RPC error response 00:17:16.614 response: 00:17:16.614 { 00:17:16.614 "code": -32602, 00:17:16.614 "message": "Invalid parameters" 00:17:16.614 }' 00:17:16.614 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:16.614 { 00:17:16.614 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:16.614 "listen_address": { 00:17:16.614 "trtype": "tcp", 00:17:16.614 "traddr": "", 00:17:16.614 "trsvcid": "4421" 00:17:16.614 }, 00:17:16.614 "method": "nvmf_subsystem_remove_listener", 00:17:16.614 "req_id": 1 00:17:16.614 } 00:17:16.614 Got JSON-RPC error response 00:17:16.614 response: 00:17:16.614 { 00:17:16.614 "code": -32602, 00:17:16.614 "message": "Invalid parameters" 00:17:16.614 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:16.615 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26592 -i 0 00:17:16.875 [2024-11-27 07:12:27.942186] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26592: invalid cntlid range [0-65519] 00:17:16.875 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:16.875 { 00:17:16.875 "nqn": "nqn.2016-06.io.spdk:cnode26592", 00:17:16.875 "min_cntlid": 0, 00:17:16.875 "method": "nvmf_create_subsystem", 00:17:16.875 "req_id": 1 00:17:16.875 } 00:17:16.875 Got JSON-RPC error response 00:17:16.875 response: 00:17:16.875 { 00:17:16.875 "code": -32602, 00:17:16.875 "message": "Invalid cntlid range [0-65519]" 00:17:16.875 }' 00:17:16.875 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:16.875 { 00:17:16.875 "nqn": "nqn.2016-06.io.spdk:cnode26592", 00:17:16.875 "min_cntlid": 0, 00:17:16.875 "method": "nvmf_create_subsystem", 00:17:16.875 "req_id": 1 00:17:16.875 } 00:17:16.875 Got JSON-RPC error response 00:17:16.875 response: 00:17:16.875 { 00:17:16.875 "code": -32602, 00:17:16.875 "message": "Invalid cntlid range [0-65519]" 00:17:16.875 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:16.875 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26083 -i 65520 00:17:17.135 [2024-11-27 07:12:28.126712] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26083: invalid cntlid range [65520-65519] 00:17:17.135 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:17.135 { 00:17:17.135 "nqn": "nqn.2016-06.io.spdk:cnode26083", 00:17:17.135 "min_cntlid": 65520, 00:17:17.135 "method": "nvmf_create_subsystem", 00:17:17.135 "req_id": 1 00:17:17.135 } 00:17:17.135 Got JSON-RPC error response 00:17:17.135 response: 00:17:17.135 { 00:17:17.135 "code": -32602, 00:17:17.135 "message": "Invalid cntlid range [65520-65519]" 00:17:17.135 }' 00:17:17.135 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:17.135 { 00:17:17.135 "nqn": "nqn.2016-06.io.spdk:cnode26083", 00:17:17.135 "min_cntlid": 65520, 00:17:17.135 "method": "nvmf_create_subsystem", 00:17:17.135 "req_id": 1 00:17:17.135 } 00:17:17.135 Got JSON-RPC error response 00:17:17.135 response: 00:17:17.135 { 00:17:17.135 "code": -32602, 00:17:17.135 "message": "Invalid cntlid range [65520-65519]" 00:17:17.135 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:17.135 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4781 -I 0 00:17:17.135 [2024-11-27 07:12:28.315321] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4781: invalid cntlid range [1-0] 00:17:17.395 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:17.395 { 00:17:17.395 "nqn": "nqn.2016-06.io.spdk:cnode4781", 00:17:17.395 "max_cntlid": 0, 00:17:17.395 "method": "nvmf_create_subsystem", 00:17:17.395 "req_id": 1 00:17:17.395 } 00:17:17.395 Got JSON-RPC error response 00:17:17.395 response: 00:17:17.395 { 00:17:17.395 "code": -32602, 00:17:17.395 "message": "Invalid cntlid range [1-0]" 00:17:17.395 }' 00:17:17.395 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:17.395 { 00:17:17.395 "nqn": "nqn.2016-06.io.spdk:cnode4781", 00:17:17.395 "max_cntlid": 0, 00:17:17.395 "method": "nvmf_create_subsystem", 00:17:17.395 "req_id": 1 00:17:17.395 } 00:17:17.395 Got JSON-RPC error response 00:17:17.395 response: 00:17:17.395 { 00:17:17.395 "code": -32602, 00:17:17.395 "message": "Invalid cntlid range [1-0]" 00:17:17.395 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:17.395 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3990 -I 65520 00:17:17.395 [2024-11-27 07:12:28.495984] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3990: invalid cntlid range [1-65520] 00:17:17.395 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:17.395 { 00:17:17.395 "nqn": "nqn.2016-06.io.spdk:cnode3990", 00:17:17.395 "max_cntlid": 65520, 00:17:17.395 "method": "nvmf_create_subsystem", 00:17:17.395 "req_id": 1 00:17:17.395 } 00:17:17.395 Got JSON-RPC error response 00:17:17.395 response: 00:17:17.395 { 00:17:17.395 "code": -32602, 00:17:17.395 "message": "Invalid cntlid range [1-65520]" 00:17:17.395 }' 
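The cntlid probes on either side of this point all exercise the same bounds check in nvmf_rpc.c: controller IDs must lie in [1, 65519] and min_cntlid must not exceed max_cntlid. A compact way to replay the whole matrix by hand -- a sketch assuming a running target, the stock scripts/rpc.py, and an arbitrary cnode name:

# Each call should fail with JSON-RPC -32602 "Invalid cntlid range ...",
# matching the responses captured in this log. $args is split on purpose;
# -i sets min_cntlid and -I sets max_cntlid, as in the traced calls.
for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-probe $args || true
done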
00:17:17.396 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:17.396 { 00:17:17.396 "nqn": "nqn.2016-06.io.spdk:cnode3990", 00:17:17.396 "max_cntlid": 65520, 00:17:17.396 "method": "nvmf_create_subsystem", 00:17:17.396 "req_id": 1 00:17:17.396 } 00:17:17.396 Got JSON-RPC error response 00:17:17.396 response: 00:17:17.396 { 00:17:17.396 "code": -32602, 00:17:17.396 "message": "Invalid cntlid range [1-65520]" 00:17:17.396 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:17.396 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30312 -i 6 -I 5 00:17:17.657 [2024-11-27 07:12:28.680551] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30312: invalid cntlid range [6-5] 00:17:17.657 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:17.657 { 00:17:17.657 "nqn": "nqn.2016-06.io.spdk:cnode30312", 00:17:17.657 "min_cntlid": 6, 00:17:17.657 "max_cntlid": 5, 00:17:17.657 "method": "nvmf_create_subsystem", 00:17:17.657 "req_id": 1 00:17:17.657 } 00:17:17.657 Got JSON-RPC error response 00:17:17.657 response: 00:17:17.657 { 00:17:17.657 "code": -32602, 00:17:17.657 "message": "Invalid cntlid range [6-5]" 00:17:17.657 }' 00:17:17.657 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:17.657 { 00:17:17.657 "nqn": "nqn.2016-06.io.spdk:cnode30312", 00:17:17.657 "min_cntlid": 6, 00:17:17.657 "max_cntlid": 5, 00:17:17.657 "method": "nvmf_create_subsystem", 00:17:17.657 "req_id": 1 00:17:17.657 } 00:17:17.657 Got JSON-RPC error response 00:17:17.657 response: 00:17:17.657 { 00:17:17.657 "code": -32602, 00:17:17.657 "message": "Invalid cntlid range [6-5]" 00:17:17.657 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:17.657 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:17.657 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:17.657 { 00:17:17.657 "name": "foobar", 00:17:17.657 "method": "nvmf_delete_target", 00:17:17.658 "req_id": 1 00:17:17.658 } 00:17:17.658 Got JSON-RPC error response 00:17:17.658 response: 00:17:17.658 { 00:17:17.658 "code": -32602, 00:17:17.658 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:17.658 }' 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:17.658 { 00:17:17.658 "name": "foobar", 00:17:17.658 "method": "nvmf_delete_target", 00:17:17.658 "req_id": 1 00:17:17.658 } 00:17:17.658 Got JSON-RPC error response 00:17:17.658 response: 00:17:17.658 { 00:17:17.658 "code": -32602, 00:17:17.658 "message": "The specified target doesn't exist, cannot delete it." 
00:17:17.658 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:17.658 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:17.658 rmmod nvme_tcp 00:17:17.658 rmmod nvme_fabrics 00:17:17.658 rmmod nvme_keyring 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2318682 ']' 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2318682 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2318682 ']' 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2318682 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2318682 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2318682' 00:17:17.919 killing process with pid 2318682 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2318682 00:17:17.919 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2318682 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.919 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:20.488 00:17:20.488 real 0m14.131s 00:17:20.488 user 0m21.222s 00:17:20.488 sys 0m6.724s 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.488 ************************************ 00:17:20.488 END TEST nvmf_invalid 00:17:20.488 ************************************ 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.488 ************************************ 00:17:20.488 START TEST nvmf_connect_stress 00:17:20.488 ************************************ 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:20.488 * Looking for test storage... 
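Before nvmf_connect_stress starts probing for test storage, the tail of nvmf_invalid above runs the usual nvmftestfini teardown. Condensed into plain commands it amounts to roughly the following; a sketch assuming root and this tree's nvmf/common.sh conventions (the suppressed _remove_spdk_ns step is expected to delete the cvl_0_0_ns_spdk namespace as well):

# Unload initiator modules (the rmmod lines above are modprobe -v -r output),
# drop only the iptables rules the harness tagged, and flush the test NIC.
sync
modprobe -v -r nvme-tcp nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1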
00:17:20.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706-1707 -- # [four near-identical exports condensed: LCOV_OPTS and LCOV are each assigned and exported with the same option block '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1', LCOV additionally prefixed with 'lcov']
00:17:20.488 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=[condensed: paths/export.sh@2-4 prepend /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to a PATH that already carries those directories several times over; @5 exports PATH and @6 echoes the resulting multi-kilobyte value -- four near-identical PATH strings condensed]
00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
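As the condensed lines above show, paths/export.sh leaves the same three toolchain directories in PATH half a dozen times. The harness never deduplicates it, but a one-liner would; a side sketch, not part of the test flow:

# Keep the first occurrence of each PATH entry, preserving order.
PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
PATH=${PATH%:}   # drop the trailing colon left by awk's ORS

00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33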
-- # '[' '' -eq 1 ']' 00:17:20.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:20.489 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:28.629 07:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:28.629 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:28.630 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:28.630 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:28.630 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:28.630 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:28.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:28.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms
00:17:28.630
00:17:28.630 --- 10.0.0.2 ping statistics ---
00:17:28.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.630 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:28.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:28.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:17:28.630
00:17:28.630 --- 10.0.0.1 ping statistics ---
00:17:28.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.630 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2323867
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2323867
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2323867 ']'
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
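waitforlisten then blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A simplified sketch of that poll loop (the retry bound mirrors max_retries=100 above; using rpc.py with rpc_get_methods as the liveness probe is an assumption, not the helper's exact implementation):

  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    # any successful RPC proves the target is up and listening on the socket
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
      break
    fi
    sleep 0.5
  done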
00:17:28.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:28.630 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:28.630 [2024-11-27 07:12:39.049575] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:17:28.630 [2024-11-27 07:12:39.049643] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:28.630 [2024-11-27 07:12:39.151996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:28.630 [2024-11-27 07:12:39.203327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:28.630 [2024-11-27 07:12:39.203381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:28.630 [2024-11-27 07:12:39.203390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:28.630 [2024-11-27 07:12:39.203397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:28.630 [2024-11-27 07:12:39.203404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:28.630 [2024-11-27 07:12:39.205230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:28.630 [2024-11-27 07:12:39.205395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:28.630 [2024-11-27 07:12:39.205396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.893 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:28.894 [2024-11-27 07:12:39.930326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
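rpc_cmd in the traces above is the autotest wrapper around SPDK's JSON-RPC client, so the transport and subsystem bring-up is roughly equivalent to issuing (flags copied from the log):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

Here -u 8192 sets the transport's I/O unit size, -a allows any host to connect, and -m 10 caps cnode1 at ten namespaces, which is what the stress client later churns against.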
00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.894 [2024-11-27 07:12:39.956059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.894 NULL1 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2324001 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:28.894 07:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.894 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.466 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.466 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:29.466 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.466 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.466 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.728 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.728 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:29.728 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.728 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.728 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.990 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.990 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:29.990 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.990 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.990 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.251 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.251 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:30.251 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.251 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.251 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.823 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.823 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:30.823 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.823 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.823 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.084 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.084 07:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:31.084 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.084 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.084 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.344 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.344 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:31.344 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.344 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.344 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.605 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.605 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:31.605 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.605 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.605 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.865 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.865 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:31.865 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.865 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.865 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.434 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.434 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:32.434 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.434 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.434 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:32.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.697 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.957 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.957 07:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:32.957 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.957 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.957 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.217 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.217 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:33.217 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.217 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.217 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.478 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.478 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:33.478 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.478 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.478 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.048 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.048 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:34.048 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.048 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.048 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.308 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.308 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:34.308 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.308 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.308 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.568 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.568 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:34.568 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.568 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.568 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.828 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.828 07:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:34.828 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.828 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.828 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.089 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.089 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:35.089 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.089 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.089 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.659 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.659 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:35.659 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.659 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.659 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.920 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.920 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:35.920 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.920 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.920 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.182 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.182 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:36.182 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.182 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.182 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.450 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.450 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:36.450 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.450 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.450 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.712 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.712 07:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:36.712 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.712 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.712 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.283 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.283 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:37.283 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.283 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.283 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.544 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.544 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:37.544 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.544 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.544 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.804 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.804 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:37.804 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.804 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.804 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.065 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.065 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:38.065 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.065 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.065 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.325 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.325 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:38.325 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.325 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.325 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.896 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.896 07:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:38.896 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.896 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.896 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.157 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324001 00:17:39.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2324001) - No such process 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2324001 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.157 rmmod nvme_tcp 00:17:39.157 rmmod nvme_fabrics 00:17:39.157 rmmod nvme_keyring 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2323867 ']' 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2323867 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2323867 ']' 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2323867 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2323867 00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
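The long alternation of kill -0 2324001 and rpc_cmd above is connect_stress.sh's watchdog: as long as the backgrounded stress client is alive, keep feeding RPCs to the target, and stop once kill -0 reports the PID gone. The shape of that loop, sketched (the exact RPC batch queued in $rpcs is not shown in this chunk):

  # PERF_PID is the backgrounded connect_stress client; $rpcs is a file of queued RPCs
  while kill -0 "$PERF_PID" 2> /dev/null; do
    rpc_cmd < "$rpcs"
  done

kill -0 sends no signal at all; it only asks the kernel whether the process still exists, which is why the final probe above ends with 'No such process' once the ten-second stress run finishes.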
00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2323867'
00:17:39.157 killing process with pid 2323867
00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2323867
00:17:39.157 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2323867
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:39.419 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:41.334 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:41.334
00:17:41.334 real 0m21.291s
00:17:41.334 user 0m42.163s
00:17:41.334 sys 0m9.372s
00:17:41.334 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:41.334 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:41.334 ************************************
00:17:41.334 END TEST nvmf_connect_stress
00:17:41.334 ************************************
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:41.595 ************************************
00:17:41.595 START TEST nvmf_fused_ordering
00:17:41.595 ************************************
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:17:41.595 * Looking for test storage...
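Teardown leans on the comment tag set during init: iptr rebuilds the firewall from a filtered dump instead of deleting rules by position, i.e. in essence:

  # drop every rule carrying the SPDK_NVMF comment in one save/restore round trip
  iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering the saved ruleset is order-independent, so it works no matter how many tagged rules the run inserted or where they ended up in the chain.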
00:17:41.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:41.595 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.596 --rc genhtml_branch_coverage=1
00:17:41.596 --rc genhtml_function_coverage=1
00:17:41.596 --rc genhtml_legend=1
00:17:41.596 --rc geninfo_all_blocks=1
00:17:41.596 --rc geninfo_unexecuted_blocks=1
00:17:41.596
00:17:41.596 '
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.596 --rc genhtml_branch_coverage=1
00:17:41.596 --rc genhtml_function_coverage=1
00:17:41.596 --rc genhtml_legend=1
00:17:41.596 --rc geninfo_all_blocks=1
00:17:41.596 --rc geninfo_unexecuted_blocks=1
00:17:41.596
00:17:41.596 '
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:17:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.596 --rc genhtml_branch_coverage=1
00:17:41.596 --rc genhtml_function_coverage=1
00:17:41.596 --rc genhtml_legend=1
00:17:41.596 --rc geninfo_all_blocks=1
00:17:41.596 --rc geninfo_unexecuted_blocks=1
00:17:41.596
00:17:41.596 '
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:17:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.596 --rc genhtml_branch_coverage=1
00:17:41.596 --rc genhtml_function_coverage=1
00:17:41.596 --rc genhtml_legend=1
00:17:41.596 --rc geninfo_all_blocks=1
00:17:41.596 --rc geninfo_unexecuted_blocks=1
00:17:41.596
00:17:41.596 '
00:17:41.596 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
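The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2 before enabling the branch-coverage flags. The same field-by-field comparison, condensed into a standalone helper (version_lt is a hypothetical name; the suite's real entry points are lt and cmp_versions, and this sketch assumes numeric fields only):

  version_lt() {   # succeeds when $1 < $2, comparing dot/dash/colon separated fields
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov is older than 2'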
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:17:41.858 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:41.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
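The '[: : integer expression expected' complaint right above is test's way of saying it was handed an empty string where -eq needs a number: whichever flag common.sh line 33 expands is simply unset in this configuration. The usual hardening is to default the expansion before comparing (FLAG and --example-arg below are placeholders, not the suite's real names):

  # with :-0 the test is always well-formed and just evaluates false when unset
  [ "${FLAG:-0}" -eq 1 ] && NVMF_APP+=(--example-arg)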
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:17:41.859 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:50.001 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:17:50.001 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:17:50.001 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:17:50.001 Found net devices under 0000:4b:00.0: cvl_0_0
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:50.001 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:17:50.002 Found net devices under 0000:4b:00.1: cvl_0_1
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
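
nvmf_tcp_init above builds the test network from the two physical ports: the target port is moved into its own network namespace while the initiator port stays in the root namespace, and an iptables rule opens the NVMe/TCP port. Condensed from the trace (interface and namespace names are specific to this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
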
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:50.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:50.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms
00:17:50.002
00:17:50.002 --- 10.0.0.2 ping statistics ---
00:17:50.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:50.002 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:50.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:50.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms
00:17:50.002
00:17:50.002 --- 10.0.0.1 ping statistics ---
00:17:50.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:50.002 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2330261
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2330261
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2330261 ']'
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:50.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:50.002 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.002 [2024-11-27 07:13:00.433046] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:17:50.002 [2024-11-27 07:13:00.433111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:50.002 [2024-11-27 07:13:00.532877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:50.002 [2024-11-27 07:13:00.583122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:50.002 [2024-11-27 07:13:00.583186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:50.002 [2024-11-27 07:13:00.583195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:50.002 [2024-11-27 07:13:00.583202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:50.002 [2024-11-27 07:13:00.583209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:50.002 [2024-11-27 07:13:00.584024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
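
The two pings above verify the namespace plumbing in both directions before the target comes up. The target itself is launched inside the namespace so it binds only to cvl_0_0; a condensed form of the invocation traced above (the path and PID are specific to this run):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # -m 0x2:    core mask, one reactor pinned to core 1 ("Reactor started on core 1")
    # -e 0xFFFF: enable all tracepoint groups, matching the app_setup_trace notices
    # -i 0:      instance (shared-memory) id, which is why 'spdk_trace -s nvmf -i 0' works
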
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.264 [2024-11-27 07:13:01.311073] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.264 [2024-11-27 07:13:01.335392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.264 NULL1
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.264 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:17:50.264 [2024-11-27 07:13:01.405116] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
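
rpc_cmd in the trace forwards each call to the target's JSON-RPC socket. Issued directly with SPDK's scripts/rpc.py (assuming the default /var/tmp/spdk.sock), the same provisioning sequence would look like this:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # -u: io_unit_size in bytes
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512-byte blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

This matches what the fused_ordering tool then sees when it connects: namespace ID 1 of roughly 1GB, as reported below.
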
00:17:50.264 [2024-11-27 07:13:01.405176] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330483 ]
00:17:50.836 Attached to nqn.2016-06.io.spdk:cnode1
00:17:50.836 Namespace ID: 1 size: 1GB
00:17:50.836 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: 1,022 further counter lines identical except for the index, with timestamps stepping from 00:17:50.836 through 00:17:51.098, 00:17:51.671, 00:17:52.245 and 00:17:52.817 at roughly every 205 commands]
00:17:52.818 fused_ordering(1023)
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:52.818 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:52.818 rmmod nvme_tcp
00:17:52.818 rmmod nvme_fabrics
00:17:52.818 rmmod nvme_keyring
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
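
nvmftestfini then unwinds everything the init path set up. The sequence below is condensed from the trace; composing iptables-save/grep/iptables-restore into one pipeline is an assumption, as the log only shows the three commands that the iptr helper runs:

    trap - SIGINT SIGTERM EXIT
    sync
    modprobe -v -r nvme-tcp        # the harness wraps this in set +e with up to 20 attempts
    modprobe -v -r nvme-fabrics
    kill 2330261                   # nvmf_tgt PID from this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the comment-tagged SPDK rule
    ip -4 addr flush cvl_0_1
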
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2330261 ']'
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2330261
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2330261 ']'
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2330261
00:17:52.818 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2330261
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2330261'
00:17:53.079 killing process with pid 2330261
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2330261
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2330261
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:53.079 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:55.627
00:17:55.627 real 0m13.739s
00:17:55.627 user 0m7.295s
00:17:55.627 sys 0m7.455s
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:55.627 ************************************
00:17:55.627 END TEST nvmf_fused_ordering
00:17:55.627 ************************************
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:55.627 ************************************
00:17:55.627 START TEST nvmf_ns_masking
00:17:55.627 ************************************
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:17:55.627 * Looking for test storage...
00:17:55.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
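
The lt/cmp_versions trace above splits "1.15" and "2" on IFS=.-: and compares the fields numerically, left to right, so the check resolves on the first field (1 < 2) and returns 0. A minimal self-contained sketch of the same idea, not SPDK's exact implementation:

    lt() {
        local IFS=.-: i
        local -a v1=($1) v2=($2)
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"    # matches the traced result: return 0
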
ver1_l : ver2_l) )) 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:55.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.627 --rc genhtml_branch_coverage=1 00:17:55.627 --rc genhtml_function_coverage=1 00:17:55.627 --rc genhtml_legend=1 00:17:55.627 --rc geninfo_all_blocks=1 00:17:55.627 --rc geninfo_unexecuted_blocks=1 00:17:55.627 00:17:55.627 ' 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:55.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.627 --rc genhtml_branch_coverage=1 00:17:55.627 --rc genhtml_function_coverage=1 00:17:55.627 --rc genhtml_legend=1 00:17:55.627 --rc geninfo_all_blocks=1 00:17:55.627 --rc geninfo_unexecuted_blocks=1 00:17:55.627 00:17:55.627 ' 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:55.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.627 --rc genhtml_branch_coverage=1 00:17:55.627 --rc genhtml_function_coverage=1 00:17:55.627 --rc genhtml_legend=1 00:17:55.627 --rc geninfo_all_blocks=1 00:17:55.627 --rc geninfo_unexecuted_blocks=1 00:17:55.627 00:17:55.627 ' 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:55.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.627 --rc genhtml_branch_coverage=1 00:17:55.627 --rc genhtml_function_coverage=1 00:17:55.627 --rc genhtml_legend=1 00:17:55.627 --rc geninfo_all_blocks=1 00:17:55.627 --rc geninfo_unexecuted_blocks=1 00:17:55.627 00:17:55.627 ' 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.627 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a4491914-faec-4772-b524-403d43d6c6b1 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=02fc799f-d588-4213-b993-1e55da70ba8e 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dcc00a04-bd11-4398-bf2f-0eb44deb4b75 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:55.628 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:03.775 07:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:03.775 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:03.775 07:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:03.775 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.775 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:03.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
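The two 'Found 0000:4b:00.x (0x8086 - 0x159b)' lines in this stretch come from the NIC discovery loop in test/nvmf/common.sh: the script walks the detected PCI functions, keeps the ones whose vendor/device pair matches the Intel E810 (the SPDK_TEST_NVMF_NICS=e810 target of this run), and records the kernel net devices exposed under each function. A minimal sketch of that pattern, assuming only the sysfs layout — the loop below is illustrative, not the script's own code:

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue   # Intel E810
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue                              # no netdev bound to this function
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

In this run the two matching functions are 0000:4b:00.0 and 0000:4b:00.1, whose net devices cvl_0_0 and cvl_0_1 become the target and initiator interfaces respectively, as the trace below goes on to show.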
00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:03.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.776 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.776 07:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:03.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:18:03.776 00:18:03.776 --- 10.0.0.2 ping statistics --- 00:18:03.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.776 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:18:03.776 00:18:03.776 --- 10.0.0.1 ping statistics --- 00:18:03.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.776 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2335288 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2335288 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2335288 ']' 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.777 [2024-11-27 07:13:14.258739] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:18:03.777 [2024-11-27 07:13:14.258804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.777 [2024-11-27 07:13:14.358142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.777 [2024-11-27 07:13:14.408857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.777 [2024-11-27 07:13:14.408910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.777 [2024-11-27 07:13:14.408919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.777 [2024-11-27 07:13:14.408926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.777 [2024-11-27 07:13:14.408932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
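Stripped of the xtrace prefixes, the target-side wiring that produced the output above comes down to: create a network namespace, move one E810 port into it, address both ends of the link, bring the interfaces up, then launch nvmf_tgt inside the namespace and wait for its RPC socket. A condensed sketch using the interface names and addresses from this run (the socket-polling loop is a crude stand-in for the waitforlisten helper, which also probes the RPC server):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # start the target inside the namespace, then wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The two ping checks above then confirm that 10.0.0.2 (target side, inside the namespace) and 10.0.0.1 (initiator side) can reach each other before any NVMe/TCP traffic is attempted.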
00:18:03.777 [2024-11-27 07:13:14.409712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.038 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.038 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:04.038 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:04.038 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:04.038 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:04.038 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.038 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:04.299 [2024-11-27 07:13:15.281385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.299 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:04.299 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:04.299 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:04.560 Malloc1 00:18:04.560 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:04.560 Malloc2 00:18:04.821 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:04.821 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:05.082 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.344 [2024-11-27 07:13:16.336031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.344 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:05.344 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcc00a04-bd11-4398-bf2f-0eb44deb4b75 -a 10.0.0.2 -s 4420 -i 4 00:18:05.604 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:05.604 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:05.604 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.604 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:05.604 
07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:07.550 [ 0]:0x1 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25158a937f1f440f960a1764fb63b212 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25158a937f1f440f960a1764fb63b212 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.550 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:07.810 [ 0]:0x1 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25158a937f1f440f960a1764fb63b212 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25158a937f1f440f960a1764fb63b212 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.810 07:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:07.810 [ 1]:0x2 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:07.810 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.810 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dff64be974a4efbab34b825ba831760 00:18:07.810 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dff64be974a4efbab34b825ba831760 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.810 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:07.810 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.072 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:08.072 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:08.333 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:08.333 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcc00a04-bd11-4398-bf2f-0eb44deb4b75 -a 10.0.0.2 -s 4420 -i 4 00:18:08.593 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:08.593 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:08.593 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.593 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:08.593 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:08.593 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:10.507 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:10.769 [ 0]:0x2 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=2dff64be974a4efbab34b825ba831760 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dff64be974a4efbab34b825ba831760 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.769 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.031 [ 0]:0x1 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25158a937f1f440f960a1764fb63b212 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25158a937f1f440f960a1764fb63b212 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.031 [ 1]:0x2 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.031 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dff64be974a4efbab34b825ba831760 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dff64be974a4efbab34b825ba831760 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.292 07:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.292 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.553 [ 0]:0x2 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dff64be974a4efbab34b825ba831760 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dff64be974a4efbab34b825ba831760 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:11.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.553 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcc00a04-bd11-4398-bf2f-0eb44deb4b75 -a 10.0.0.2 -s 4420 -i 4 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:11.813 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.460 [ 0]:0x1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25158a937f1f440f960a1764fb63b212 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25158a937f1f440f960a1764fb63b212 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.460 [ 1]:0x2 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dff64be974a4efbab34b825ba831760 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dff64be974a4efbab34b825ba831760 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.460 [ 0]:0x2 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dff64be974a4efbab34b825ba831760 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dff64be974a4efbab34b825ba831760 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.460 07:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:14.460 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:14.460 [2024-11-27 07:13:25.637747] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:14.460 request: 00:18:14.460 { 00:18:14.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.460 "nsid": 2, 00:18:14.460 "host": "nqn.2016-06.io.spdk:host1", 00:18:14.460 "method": "nvmf_ns_remove_host", 00:18:14.460 "req_id": 1 00:18:14.460 } 00:18:14.460 Got JSON-RPC error response 00:18:14.460 response: 00:18:14.460 { 00:18:14.460 "code": -32602, 00:18:14.460 "message": "Invalid parameters" 00:18:14.460 } 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:14.722 07:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.722 [ 0]:0x2 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2dff64be974a4efbab34b825ba831760 00:18:14.722 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2dff64be974a4efbab34b825ba831760 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.723 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:14.723 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2337648 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2337648 /var/tmp/host.sock 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2337648 ']' 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:14.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.984 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:14.984 [2024-11-27 07:13:26.073885] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:18:14.984 [2024-11-27 07:13:26.073939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337648 ] 00:18:14.984 [2024-11-27 07:13:26.161115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.244 [2024-11-27 07:13:26.197341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.814 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.814 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:15.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:16.075 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:16.076 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a4491914-faec-4772-b524-403d43d6c6b1 00:18:16.076 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:16.076 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A4491914FAEC4772B524403D43D6C6B1 -i 00:18:16.336 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 02fc799f-d588-4213-b993-1e55da70ba8e 00:18:16.337 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:16.337 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 02FC799FD5884213B9931E55DA70BA8E -i 00:18:16.598 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:16.598 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:16.859 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:16.859 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:17.429 nvme0n1 00:18:17.429 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:17.429 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:17.690 nvme1n2 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:17.690 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:17.950 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a4491914-faec-4772-b524-403d43d6c6b1 == \a\4\4\9\1\9\1\4\-\f\a\e\c\-\4\7\7\2\-\b\5\2\4\-\4\0\3\d\4\3\d\6\c\6\b\1 ]] 00:18:17.950 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:17.950 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:17.950 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:18.213 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
02fc799f-d588-4213-b993-1e55da70ba8e == \0\2\f\c\7\9\9\f\-\d\5\8\8\-\4\2\1\3\-\b\9\9\3\-\1\e\5\5\d\a\7\0\b\a\8\e ]] 00:18:18.213 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:18.213 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:18.477 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid a4491914-faec-4772-b524-403d43d6c6b1 00:18:18.477 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:18.477 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A4491914FAEC4772B524403D43D6C6B1 00:18:18.477 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:18.477 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A4491914FAEC4772B524403D43D6C6B1 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:18.478 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A4491914FAEC4772B524403D43D6C6B1 00:18:18.739 [2024-11-27 07:13:29.736488] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:18.739 [2024-11-27 07:13:29.736517] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:18.739 [2024-11-27 07:13:29.736524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.739 request: 00:18:18.739 { 00:18:18.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.739 "namespace": { 00:18:18.739 "bdev_name": 
"invalid", 00:18:18.739 "nsid": 1, 00:18:18.739 "nguid": "A4491914FAEC4772B524403D43D6C6B1", 00:18:18.739 "no_auto_visible": false, 00:18:18.739 "hide_metadata": false 00:18:18.739 }, 00:18:18.739 "method": "nvmf_subsystem_add_ns", 00:18:18.739 "req_id": 1 00:18:18.739 } 00:18:18.739 Got JSON-RPC error response 00:18:18.739 response: 00:18:18.739 { 00:18:18.739 "code": -32602, 00:18:18.739 "message": "Invalid parameters" 00:18:18.739 } 00:18:18.739 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:18.739 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.739 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.740 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.740 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid a4491914-faec-4772-b524-403d43d6c6b1 00:18:18.740 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:18.740 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A4491914FAEC4772B524403D43D6C6B1 -i 00:18:18.740 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:21.288 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:21.288 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:21.288 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2337648 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2337648 ']' 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2337648 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337648 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337648' 00:18:21.288 killing process with pid 2337648 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2337648 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2337648 00:18:21.288 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:21.549 rmmod nvme_tcp 00:18:21.549 rmmod nvme_fabrics 00:18:21.549 rmmod nvme_keyring 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2335288 ']' 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2335288 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2335288 ']' 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2335288 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2335288 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2335288' 00:18:21.549 killing process with pid 2335288 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2335288 00:18:21.549 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2335288 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
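Condensing the namespace-masking flow traced above: each namespace is created with an explicit NGUID, visibility is granted per host with nvmf_ns_add_host, and a host-side controller attached with that host NQN then enumerates only what it was granted. A minimal sketch assembled from the RPCs traced above (rpc.py path shortened; $NGUID stands in for the uuid2nguid output, i.e. the UUID uppercased with dashes stripped via tr -d -):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$NGUID" -i
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
    # second SPDK app on its own RPC socket plays the host; its controller sees only namespace 1
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

The NOT wrapper bracketing the expected-to-fail calls simply asserts the opposite exit status: per the autotest_common.sh@652-@679 trace it records the wrapped command's status in es and returns success only when es is non-zero.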
00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.810 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.724 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:23.724 00:18:23.724 real 0m28.476s 00:18:23.724 user 0m32.495s 00:18:23.724 sys 0m8.227s 00:18:23.724 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.724 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:23.724 ************************************ 00:18:23.724 END TEST nvmf_ns_masking 00:18:23.724 ************************************ 00:18:23.986 07:13:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:23.986 07:13:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:23.986 07:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:23.987 07:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.987 07:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.987 ************************************ 00:18:23.987 START TEST nvmf_nvme_cli 00:18:23.987 ************************************ 00:18:23.987 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:23.987 * Looking for test storage... 
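run_test, which launches nvme_cli.sh here just as it launched the masking test, is the harness wrapper behind the START TEST/END TEST banners. In spirit (a condensed sketch inferred from the traced autotest_common.sh line numbers; the real helper also manages xtrace and timing):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                   # the test script plus its arguments, e.g. --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }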
00:18:23.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.987 --rc genhtml_branch_coverage=1 00:18:23.987 --rc genhtml_function_coverage=1 00:18:23.987 --rc genhtml_legend=1 00:18:23.987 --rc geninfo_all_blocks=1 00:18:23.987 --rc geninfo_unexecuted_blocks=1 00:18:23.987 00:18:23.987 ' 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.987 --rc genhtml_branch_coverage=1 00:18:23.987 --rc genhtml_function_coverage=1 00:18:23.987 --rc genhtml_legend=1 00:18:23.987 --rc geninfo_all_blocks=1 00:18:23.987 --rc geninfo_unexecuted_blocks=1 00:18:23.987 00:18:23.987 ' 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.987 --rc genhtml_branch_coverage=1 00:18:23.987 --rc genhtml_function_coverage=1 00:18:23.987 --rc genhtml_legend=1 00:18:23.987 --rc geninfo_all_blocks=1 00:18:23.987 --rc geninfo_unexecuted_blocks=1 00:18:23.987 00:18:23.987 ' 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.987 --rc genhtml_branch_coverage=1 00:18:23.987 --rc genhtml_function_coverage=1 00:18:23.987 --rc genhtml_legend=1 00:18:23.987 --rc geninfo_all_blocks=1 00:18:23.987 --rc geninfo_unexecuted_blocks=1 00:18:23.987 00:18:23.987 ' 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
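The lt 1.15 2 sequence just traced is scripts/common.sh comparing the installed lcov against version 2: cmp_versions splits both version strings on .-: and walks the numeric fields left to right. Roughly (a sketch condensed from the traced statements; the real function also handles the other comparison operators and padding):

    cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]] && return 0 || return 1; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]] && return 0 || return 1; }
        done
        return 1                         # equal versions satisfy neither strict comparison
    }

Because 1 < 2 on the first field, lt succeeds and the harness selects the --rc lcov_branch_coverage style LCOV_OPTS echoed above, the variant used by pre-2.0 lcov.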
00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.987 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:24.252 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.253 07:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.253 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:32.402 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:32.403 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:32.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.403 
07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:32.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:32.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:18:32.403 00:18:32.403 --- 10.0.0.2 ping statistics --- 00:18:32.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.403 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:18:32.403 00:18:32.403 --- 10.0.0.1 ping statistics --- 00:18:32.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.403 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:32.403 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2343194 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2343194 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2343194 ']' 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.404 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.404 [2024-11-27 07:13:42.849970] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:18:32.404 [2024-11-27 07:13:42.850039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.404 [2024-11-27 07:13:42.947321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.404 [2024-11-27 07:13:43.001462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.404 [2024-11-27 07:13:43.001516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.404 [2024-11-27 07:13:43.001525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.404 [2024-11-27 07:13:43.001532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.404 [2024-11-27 07:13:43.001538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.404 [2024-11-27 07:13:43.003590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.404 [2024-11-27 07:13:43.003753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.404 [2024-11-27 07:13:43.003913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.404 [2024-11-27 07:13:43.003914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:32.670 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 [2024-11-27 07:13:43.733654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 Malloc0 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 Malloc1 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 [2024-11-27 07:13:43.852759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.671 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:18:32.934 00:18:32.934 Discovery Log Number of Records 2, Generation counter 2 00:18:32.934 =====Discovery Log Entry 0====== 00:18:32.934 trtype: tcp 00:18:32.934 adrfam: ipv4 00:18:32.934 subtype: current discovery subsystem 00:18:32.934 treq: not required 00:18:32.934 portid: 0 00:18:32.934 trsvcid: 4420 00:18:32.934 subnqn: 
nqn.2014-08.org.nvmexpress.discovery
00:18:32.934 traddr: 10.0.0.2
00:18:32.934 eflags: explicit discovery connections, duplicate discovery information
00:18:32.934 sectype: none
00:18:32.934 =====Discovery Log Entry 1======
00:18:32.934 trtype: tcp
00:18:32.934 adrfam: ipv4
00:18:32.934 subtype: nvme subsystem
00:18:32.934 treq: not required
00:18:32.934 portid: 0
00:18:32.934 trsvcid: 4420
00:18:32.934 subnqn: nqn.2016-06.io.spdk:cnode1
00:18:32.934 traddr: 10.0.0.2
00:18:32.934 eflags: none
00:18:32.934 sectype: none
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:18:32.934 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:34.852 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:34.852 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:18:34.852 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:34.852 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:18:34.852 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:18:34.852 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:18:36.768 07:13:47
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:36.768 /dev/nvme0n2 ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.768 07:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.768 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:36.769 rmmod nvme_tcp 00:18:36.769 rmmod nvme_fabrics 00:18:36.769 rmmod nvme_keyring 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2343194 ']' 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2343194 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2343194 ']' 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2343194 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.769 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2343194
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2343194'
00:18:37.031 killing process with pid 2343194
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2343194
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2343194
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:37.031 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:39.577
00:18:39.577 real 0m15.238s
00:18:39.577 user 0m22.811s
00:18:39.577 sys 0m6.378s
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:39.577 ************************************
00:18:39.577 END TEST nvmf_nvme_cli
00:18:39.577 ************************************
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:39.577 ************************************
00:18:39.577 START TEST nvmf_vfio_user
00:18:39.577 ************************************
00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh
--transport=tcp 00:18:39.577 * Looking for test storage... 00:18:39.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:39.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.577 --rc genhtml_branch_coverage=1 00:18:39.577 --rc genhtml_function_coverage=1 00:18:39.577 --rc genhtml_legend=1 00:18:39.577 --rc geninfo_all_blocks=1 00:18:39.577 --rc geninfo_unexecuted_blocks=1 00:18:39.577 00:18:39.577 ' 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:39.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.577 --rc genhtml_branch_coverage=1 00:18:39.577 --rc genhtml_function_coverage=1 00:18:39.577 --rc genhtml_legend=1 00:18:39.577 --rc geninfo_all_blocks=1 00:18:39.577 --rc geninfo_unexecuted_blocks=1 00:18:39.577 00:18:39.577 ' 00:18:39.577 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:39.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.577 --rc genhtml_branch_coverage=1 00:18:39.578 --rc genhtml_function_coverage=1 00:18:39.578 --rc genhtml_legend=1 00:18:39.578 --rc geninfo_all_blocks=1 00:18:39.578 --rc geninfo_unexecuted_blocks=1 00:18:39.578 00:18:39.578 ' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:39.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.578 --rc genhtml_branch_coverage=1 00:18:39.578 --rc genhtml_function_coverage=1 00:18:39.578 --rc genhtml_legend=1 00:18:39.578 --rc geninfo_all_blocks=1 00:18:39.578 --rc geninfo_unexecuted_blocks=1 00:18:39.578 00:18:39.578 ' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:39.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
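Before the vfio-user trace picks up, the host-side shape of the nvmf_nvme_cli run that just ended above is worth a compact restatement: discover, connect, wait until both namespaces surface in lsblk, disconnect. A sketch with the hostnqn/hostid/serial values copied from that trace; the until-loop is a simplified stand-in for the suite's waitforserial helper:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # Simplified waitforserial: both namespaces carry the subsystem serial.
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 2 ]; do
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1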
00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2344801 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2344801' 00:18:39.578 Process pid: 2344801 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2344801 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2344801 ']' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.578 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:39.578 [2024-11-27 07:13:50.600825] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:18:39.578 [2024-11-27 07:13:50.600900] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.578 [2024-11-27 07:13:50.690488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.578 [2024-11-27 07:13:50.731666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.578 [2024-11-27 07:13:50.731709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
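The nvmf_vfio_user.sh prologue being traced here starts its own target instance, this time pinned to an explicit core list rather than a mask, and arms a cleanup trap around the whole run. A sketch of that startup, with the suite's killprocess helper approximated by a bare kill:

    # Sketch of the startup traced above; killprocess is approximated by kill -9.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT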
00:18:39.578 [2024-11-27 07:13:50.731715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.578 [2024-11-27 07:13:50.731720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.578 [2024-11-27 07:13:50.731725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.578 [2024-11-27 07:13:50.733334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.578 [2024-11-27 07:13:50.733491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.578 [2024-11-27 07:13:50.733531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.578 [2024-11-27 07:13:50.733533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.521 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.521 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:40.521 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:41.463 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:41.463 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:41.463 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:41.463 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:41.463 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:41.463 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:41.724 Malloc1 00:18:41.724 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:41.985 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:41.985 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:42.246 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:42.246 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:42.246 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:42.507 Malloc2 00:18:42.507 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
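setup_nvmf_vfio_user, traced above for device 1 and continuing below for device 2, gives each subsystem its own VFIOUSER listener rooted at a per-device socket directory. The per-device steps collapse into the loop the script itself runs via seq 1 $NUM_DEVICES (shown here with a repo-relative rpc.py path, an assumption):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done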
00:18:42.769 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:42.769 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:43.032 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:43.032 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:43.032 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:43.032 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:43.032 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:43.032 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:43.032 [2024-11-27 07:13:54.136694] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:18:43.032 [2024-11-27 07:13:54.136734] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2345529 ] 00:18:43.032 [2024-11-27 07:13:54.177481] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:43.032 [2024-11-27 07:13:54.179724] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:43.032 [2024-11-27 07:13:54.179740] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f114d926000 00:18:43.032 [2024-11-27 07:13:54.180726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.181731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.184163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.184749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.185753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.186766] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.187776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.188776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:43.032 [2024-11-27 07:13:54.189783] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:43.032 [2024-11-27 07:13:54.189790] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f114d91b000 00:18:43.032 [2024-11-27 07:13:54.190706] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:43.032 [2024-11-27 07:13:54.202199] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:43.032 [2024-11-27 07:13:54.202222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:43.032 [2024-11-27 07:13:54.207887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:43.032 [2024-11-27 07:13:54.207924] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:43.032 [2024-11-27 07:13:54.207991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:43.032 [2024-11-27 07:13:54.208005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:43.032 [2024-11-27 07:13:54.208010] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:43.032 [2024-11-27 07:13:54.208893] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:43.032 [2024-11-27 07:13:54.208902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:43.032 [2024-11-27 07:13:54.208907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:43.032 [2024-11-27 07:13:54.209898] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:43.032 [2024-11-27 07:13:54.209905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:43.032 [2024-11-27 07:13:54.209911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:43.032 [2024-11-27 07:13:54.210901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:43.032 [2024-11-27 07:13:54.210907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:43.033 [2024-11-27 07:13:54.211908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
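The nvme/nvme_vfio/vfio_pci debug lines around this point are the standard controller bring-up (read VS and CAP, observe CC.EN=0 with CSTS.RDY=0, write CC.EN=1, then poll until CSTS.RDY=1), driven over the vfio-user socket by the identify utility. Its invocation, repeated from earlier in this trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci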
00:18:43.033 [2024-11-27 07:13:54.211915] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:43.033 [2024-11-27 07:13:54.211919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:43.033 [2024-11-27 07:13:54.211924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:43.033 [2024-11-27 07:13:54.212030] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:43.033 [2024-11-27 07:13:54.212034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:43.033 [2024-11-27 07:13:54.212038] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:43.033 [2024-11-27 07:13:54.212913] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:43.033 [2024-11-27 07:13:54.213917] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:43.033 [2024-11-27 07:13:54.214919] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:43.033 [2024-11-27 07:13:54.215918] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:43.033 [2024-11-27 07:13:54.215980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:43.033 [2024-11-27 07:13:54.216932] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:43.033 [2024-11-27 07:13:54.216937] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:43.033 [2024-11-27 07:13:54.216941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.216956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:43.033 [2024-11-27 07:13:54.216962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.216976] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:43.033 [2024-11-27 07:13:54.216980] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.033 [2024-11-27 07:13:54.216983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.033 [2024-11-27 07:13:54.216994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:43.033 [2024-11-27 07:13:54.217031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:43.033 [2024-11-27 07:13:54.217039] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:43.033 [2024-11-27 07:13:54.217042] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:43.033 [2024-11-27 07:13:54.217046] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:43.033 [2024-11-27 07:13:54.217049] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:43.033 [2024-11-27 07:13:54.217053] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:43.033 [2024-11-27 07:13:54.217056] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:43.033 [2024-11-27 07:13:54.217059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:43.033 [2024-11-27 07:13:54.217087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:43.033 [2024-11-27 07:13:54.217095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.033 [2024-11-27 07:13:54.217102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.033 [2024-11-27 07:13:54.217109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.033 [2024-11-27 07:13:54.217115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.033 [2024-11-27 07:13:54.217118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:43.033 [2024-11-27 07:13:54.217138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:43.033 [2024-11-27 07:13:54.217142] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:43.033 
[2024-11-27 07:13:54.217146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:43.033 [2024-11-27 07:13:54.217175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:43.033 [2024-11-27 07:13:54.217218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:43.033 [2024-11-27 07:13:54.217232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:43.033 [2024-11-27 07:13:54.217234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.033 [2024-11-27 07:13:54.217239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:43.033 [2024-11-27 07:13:54.217250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:43.033 [2024-11-27 07:13:54.217259] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:43.033 [2024-11-27 07:13:54.217268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217279] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:43.033 [2024-11-27 07:13:54.217282] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.033 [2024-11-27 07:13:54.217284] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.033 [2024-11-27 07:13:54.217290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:43.033 [2024-11-27 07:13:54.217309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:43.033 [2024-11-27 07:13:54.217317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:43.033 [2024-11-27 07:13:54.217323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217327] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:43.034 [2024-11-27 07:13:54.217330] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.034 [2024-11-27 07:13:54.217333] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.034 [2024-11-27 07:13:54.217337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217380] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:43.034 [2024-11-27 07:13:54.217383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:43.034 [2024-11-27 07:13:54.217387] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:43.034 [2024-11-27 07:13:54.217402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217469] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:43.034 [2024-11-27 07:13:54.217472] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:43.034 [2024-11-27 07:13:54.217475] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:43.034 [2024-11-27 07:13:54.217477] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:43.034 [2024-11-27 07:13:54.217480] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:43.034 [2024-11-27 07:13:54.217484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:43.034 [2024-11-27 07:13:54.217489] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:43.034 [2024-11-27 07:13:54.217492] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:43.034 [2024-11-27 07:13:54.217495] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.034 [2024-11-27 07:13:54.217499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217504] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:43.034 [2024-11-27 07:13:54.217507] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:43.034 [2024-11-27 07:13:54.217509] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.034 [2024-11-27 07:13:54.217514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217519] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:43.034 [2024-11-27 07:13:54.217522] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:43.034 [2024-11-27 07:13:54.217524] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:43.034 [2024-11-27 07:13:54.217529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:43.034 [2024-11-27 07:13:54.217534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:43.034 [2024-11-27 07:13:54.217555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:43.034 ===================================================== 00:18:43.034 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:43.034 ===================================================== 00:18:43.034 Controller Capabilities/Features 00:18:43.034 ================================ 00:18:43.034 Vendor ID: 4e58 00:18:43.034 Subsystem Vendor ID: 4e58 00:18:43.034 Serial Number: SPDK1 00:18:43.034 Model Number: SPDK bdev Controller 00:18:43.034 Firmware Version: 25.01 00:18:43.034 Recommended Arb Burst: 6 00:18:43.034 IEEE OUI Identifier: 8d 6b 50 00:18:43.034 Multi-path I/O 00:18:43.034 May have multiple subsystem ports: Yes 00:18:43.034 May have multiple controllers: Yes 00:18:43.034 Associated with SR-IOV VF: No 00:18:43.034 Max Data Transfer Size: 131072 00:18:43.034 Max Number of Namespaces: 32 00:18:43.034 Max Number of I/O Queues: 127 00:18:43.034 NVMe Specification Version (VS): 1.3 00:18:43.034 NVMe Specification Version (Identify): 1.3 00:18:43.034 Maximum Queue Entries: 256 00:18:43.034 Contiguous Queues Required: Yes 00:18:43.034 Arbitration Mechanisms Supported 00:18:43.034 Weighted Round Robin: Not Supported 00:18:43.034 Vendor Specific: Not Supported 00:18:43.034 Reset Timeout: 15000 ms 00:18:43.034 Doorbell Stride: 4 bytes 00:18:43.034 NVM Subsystem Reset: Not Supported 00:18:43.034 Command Sets Supported 00:18:43.034 NVM Command Set: Supported 00:18:43.034 Boot Partition: Not Supported 00:18:43.034 Memory Page Size Minimum: 4096 bytes 00:18:43.034 Memory Page Size Maximum: 4096 bytes 00:18:43.034 Persistent Memory Region: Not Supported 00:18:43.034 Optional Asynchronous Events Supported 00:18:43.034 Namespace Attribute Notices: Supported 00:18:43.034 Firmware Activation Notices: Not Supported 00:18:43.034 ANA Change Notices: Not Supported 00:18:43.034 PLE Aggregate Log Change Notices: Not Supported 00:18:43.034 LBA Status Info Alert Notices: Not Supported 00:18:43.034 EGE Aggregate Log Change Notices: Not Supported 00:18:43.034 Normal NVM Subsystem Shutdown event: Not Supported 00:18:43.034 Zone Descriptor Change Notices: Not Supported 00:18:43.034 Discovery Log Change Notices: Not Supported 00:18:43.034 Controller Attributes 00:18:43.034 128-bit Host Identifier: Supported 00:18:43.034 Non-Operational Permissive Mode: Not Supported 00:18:43.034 NVM Sets: Not Supported 00:18:43.034 Read Recovery Levels: Not Supported 00:18:43.034 Endurance Groups: Not Supported 00:18:43.034 Predictable Latency Mode: Not Supported 00:18:43.034 Traffic Based Keep ALive: Not Supported 00:18:43.034 Namespace Granularity: Not Supported 00:18:43.034 SQ Associations: Not Supported 00:18:43.034 UUID List: Not Supported 00:18:43.034 Multi-Domain Subsystem: Not Supported 00:18:43.034 Fixed Capacity Management: Not Supported 00:18:43.034 Variable Capacity Management: Not Supported 00:18:43.034 Delete Endurance Group: Not Supported 00:18:43.034 Delete NVM Set: Not Supported 00:18:43.034 Extended LBA Formats Supported: Not Supported 00:18:43.034 Flexible Data Placement Supported: Not Supported 00:18:43.034 00:18:43.034 Controller Memory Buffer Support 00:18:43.034 ================================ 00:18:43.034 
Supported: No 00:18:43.034 00:18:43.034 Persistent Memory Region Support 00:18:43.034 ================================ 00:18:43.034 Supported: No 00:18:43.034 00:18:43.035 Admin Command Set Attributes 00:18:43.035 ============================ 00:18:43.035 Security Send/Receive: Not Supported 00:18:43.035 Format NVM: Not Supported 00:18:43.035 Firmware Activate/Download: Not Supported 00:18:43.035 Namespace Management: Not Supported 00:18:43.035 Device Self-Test: Not Supported 00:18:43.035 Directives: Not Supported 00:18:43.035 NVMe-MI: Not Supported 00:18:43.035 Virtualization Management: Not Supported 00:18:43.035 Doorbell Buffer Config: Not Supported 00:18:43.035 Get LBA Status Capability: Not Supported 00:18:43.035 Command & Feature Lockdown Capability: Not Supported 00:18:43.035 Abort Command Limit: 4 00:18:43.035 Async Event Request Limit: 4 00:18:43.035 Number of Firmware Slots: N/A 00:18:43.035 Firmware Slot 1 Read-Only: N/A 00:18:43.035 Firmware Activation Without Reset: N/A 00:18:43.035 Multiple Update Detection Support: N/A 00:18:43.035 Firmware Update Granularity: No Information Provided 00:18:43.035 Per-Namespace SMART Log: No 00:18:43.035 Asymmetric Namespace Access Log Page: Not Supported 00:18:43.035 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:43.035 Command Effects Log Page: Supported 00:18:43.035 Get Log Page Extended Data: Supported 00:18:43.035 Telemetry Log Pages: Not Supported 00:18:43.035 Persistent Event Log Pages: Not Supported 00:18:43.035 Supported Log Pages Log Page: May Support 00:18:43.035 Commands Supported & Effects Log Page: Not Supported 00:18:43.035 Feature Identifiers & Effects Log Page:May Support 00:18:43.035 NVMe-MI Commands & Effects Log Page: May Support 00:18:43.035 Data Area 4 for Telemetry Log: Not Supported 00:18:43.035 Error Log Page Entries Supported: 128 00:18:43.035 Keep Alive: Supported 00:18:43.035 Keep Alive Granularity: 10000 ms 00:18:43.035 00:18:43.035 NVM Command Set Attributes 00:18:43.035 ========================== 00:18:43.035 Submission Queue Entry Size 00:18:43.035 Max: 64 00:18:43.035 Min: 64 00:18:43.035 Completion Queue Entry Size 00:18:43.035 Max: 16 00:18:43.035 Min: 16 00:18:43.035 Number of Namespaces: 32 00:18:43.035 Compare Command: Supported 00:18:43.035 Write Uncorrectable Command: Not Supported 00:18:43.035 Dataset Management Command: Supported 00:18:43.035 Write Zeroes Command: Supported 00:18:43.035 Set Features Save Field: Not Supported 00:18:43.035 Reservations: Not Supported 00:18:43.035 Timestamp: Not Supported 00:18:43.035 Copy: Supported 00:18:43.035 Volatile Write Cache: Present 00:18:43.035 Atomic Write Unit (Normal): 1 00:18:43.035 Atomic Write Unit (PFail): 1 00:18:43.035 Atomic Compare & Write Unit: 1 00:18:43.035 Fused Compare & Write: Supported 00:18:43.035 Scatter-Gather List 00:18:43.035 SGL Command Set: Supported (Dword aligned) 00:18:43.035 SGL Keyed: Not Supported 00:18:43.035 SGL Bit Bucket Descriptor: Not Supported 00:18:43.035 SGL Metadata Pointer: Not Supported 00:18:43.035 Oversized SGL: Not Supported 00:18:43.035 SGL Metadata Address: Not Supported 00:18:43.035 SGL Offset: Not Supported 00:18:43.035 Transport SGL Data Block: Not Supported 00:18:43.035 Replay Protected Memory Block: Not Supported 00:18:43.035 00:18:43.035 Firmware Slot Information 00:18:43.035 ========================= 00:18:43.035 Active slot: 1 00:18:43.035 Slot 1 Firmware Revision: 25.01 00:18:43.035 00:18:43.035 00:18:43.035 Commands Supported and Effects 00:18:43.035 ============================== 00:18:43.035 Admin 
Commands 00:18:43.035 -------------- 00:18:43.035 Get Log Page (02h): Supported 00:18:43.035 Identify (06h): Supported 00:18:43.035 Abort (08h): Supported 00:18:43.035 Set Features (09h): Supported 00:18:43.035 Get Features (0Ah): Supported 00:18:43.035 Asynchronous Event Request (0Ch): Supported 00:18:43.035 Keep Alive (18h): Supported 00:18:43.035 I/O Commands 00:18:43.035 ------------ 00:18:43.035 Flush (00h): Supported LBA-Change 00:18:43.035 Write (01h): Supported LBA-Change 00:18:43.035 Read (02h): Supported 00:18:43.035 Compare (05h): Supported 00:18:43.035 Write Zeroes (08h): Supported LBA-Change 00:18:43.035 Dataset Management (09h): Supported LBA-Change 00:18:43.035 Copy (19h): Supported LBA-Change 00:18:43.035 00:18:43.035 Error Log 00:18:43.035 ========= 00:18:43.035 00:18:43.035 Arbitration 00:18:43.035 =========== 00:18:43.035 Arbitration Burst: 1 00:18:43.035 00:18:43.035 Power Management 00:18:43.035 ================ 00:18:43.035 Number of Power States: 1 00:18:43.035 Current Power State: Power State #0 00:18:43.035 Power State #0: 00:18:43.035 Max Power: 0.00 W 00:18:43.035 Non-Operational State: Operational 00:18:43.035 Entry Latency: Not Reported 00:18:43.035 Exit Latency: Not Reported 00:18:43.035 Relative Read Throughput: 0 00:18:43.035 Relative Read Latency: 0 00:18:43.035 Relative Write Throughput: 0 00:18:43.035 Relative Write Latency: 0 00:18:43.035 Idle Power: Not Reported 00:18:43.035 Active Power: Not Reported 00:18:43.035 Non-Operational Permissive Mode: Not Supported 00:18:43.035 00:18:43.035 Health Information 00:18:43.035 ================== 00:18:43.035 Critical Warnings: 00:18:43.035 Available Spare Space: OK 00:18:43.035 Temperature: OK 00:18:43.035 Device Reliability: OK 00:18:43.035 Read Only: No 00:18:43.035 Volatile Memory Backup: OK 00:18:43.035 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:43.035 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:43.035 Available Spare: 0% 00:18:43.035 Available Sp[2024-11-27 07:13:54.217629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:43.035 [2024-11-27 07:13:54.217637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:43.035 [2024-11-27 07:13:54.217659] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:43.035 [2024-11-27 07:13:54.217666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.035 [2024-11-27 07:13:54.217670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.035 [2024-11-27 07:13:54.217675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.035 [2024-11-27 07:13:54.217680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.035 [2024-11-27 07:13:54.217936] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:43.035 [2024-11-27 07:13:54.217944] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:43.035 [2024-11-27 07:13:54.218941] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:43.035 [2024-11-27 07:13:54.218979] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:43.035 [2024-11-27 07:13:54.218984] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:43.035 [2024-11-27 07:13:54.219949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:43.035 [2024-11-27 07:13:54.219957] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:43.035 [2024-11-27 07:13:54.220011] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:43.035 [2024-11-27 07:13:54.220975] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:43.297 are Threshold: 0% 00:18:43.297 Life Percentage Used: 0% 00:18:43.297 Data Units Read: 0 00:18:43.297 Data Units Written: 0 00:18:43.297 Host Read Commands: 0 00:18:43.297 Host Write Commands: 0 00:18:43.297 Controller Busy Time: 0 minutes 00:18:43.297 Power Cycles: 0 00:18:43.297 Power On Hours: 0 hours 00:18:43.297 Unsafe Shutdowns: 0 00:18:43.297 Unrecoverable Media Errors: 0 00:18:43.297 Lifetime Error Log Entries: 0 00:18:43.297 Warning Temperature Time: 0 minutes 00:18:43.297 Critical Temperature Time: 0 minutes 00:18:43.297 00:18:43.297 Number of Queues 00:18:43.297 ================ 00:18:43.297 Number of I/O Submission Queues: 127 00:18:43.297 Number of I/O Completion Queues: 127 00:18:43.297 00:18:43.297 Active Namespaces 00:18:43.297 ================= 00:18:43.297 Namespace ID:1 00:18:43.297 Error Recovery Timeout: Unlimited 00:18:43.297 Command Set Identifier: NVM (00h) 00:18:43.297 Deallocate: Supported 00:18:43.297 Deallocated/Unwritten Error: Not Supported 00:18:43.297 Deallocated Read Value: Unknown 00:18:43.297 Deallocate in Write Zeroes: Not Supported 00:18:43.297 Deallocated Guard Field: 0xFFFF 00:18:43.297 Flush: Supported 00:18:43.297 Reservation: Supported 00:18:43.297 Namespace Sharing Capabilities: Multiple Controllers 00:18:43.297 Size (in LBAs): 131072 (0GiB) 00:18:43.297 Capacity (in LBAs): 131072 (0GiB) 00:18:43.297 Utilization (in LBAs): 131072 (0GiB) 00:18:43.297 NGUID: 87A003005ABA43F4809FA79F5859C8B1 00:18:43.297 UUID: 87a00300-5aba-43f4-809f-a79f5859c8b1 00:18:43.297 Thin Provisioning: Not Supported 00:18:43.297 Per-NS Atomic Units: Yes 00:18:43.297 Atomic Boundary Size (Normal): 0 00:18:43.297 Atomic Boundary Size (PFail): 0 00:18:43.297 Atomic Boundary Offset: 0 00:18:43.297 Maximum Single Source Range Length: 65535 00:18:43.297 Maximum Copy Length: 65535 00:18:43.297 Maximum Source Range Count: 1 00:18:43.297 NGUID/EUI64 Never Reused: No 00:18:43.297 Namespace Write Protected: No 00:18:43.297 Number of LBA Formats: 1 00:18:43.297 Current LBA Format: LBA Format #00 00:18:43.297 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:43.297 00:18:43.297 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:18:43.297 [2024-11-27 07:13:54.412883] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:48.585 Initializing NVMe Controllers 00:18:48.585 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:48.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:48.585 Initialization complete. Launching workers. 00:18:48.585 ======================================================== 00:18:48.585 Latency(us) 00:18:48.585 Device Information : IOPS MiB/s Average min max 00:18:48.585 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39993.70 156.23 3200.73 856.49 6801.87 00:18:48.585 ======================================================== 00:18:48.585 Total : 39993.70 156.23 3200.73 856.49 6801.87 00:18:48.585 00:18:48.585 [2024-11-27 07:13:59.433622] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:48.585 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:48.585 [2024-11-27 07:13:59.627430] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:53.874 Initializing NVMe Controllers 00:18:53.874 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:53.874 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:53.874 Initialization complete. Launching workers. 
00:18:53.874 ======================================================== 00:18:53.874 Latency(us) 00:18:53.874 Device Information : IOPS MiB/s Average min max 00:18:53.874 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.72 7631.83 8070.19 00:18:53.874 ======================================================== 00:18:53.874 Total : 16051.20 62.70 7980.72 7631.83 8070.19 00:18:53.874 00:18:53.874 [2024-11-27 07:14:04.663843] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:53.874 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:53.874 [2024-11-27 07:14:04.863680] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:59.168 [2024-11-27 07:14:09.957485] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:59.168 Initializing NVMe Controllers 00:18:59.168 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:59.168 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:59.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:59.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:59.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:59.168 Initialization complete. Launching workers. 00:18:59.168 Starting thread on core 2 00:18:59.168 Starting thread on core 3 00:18:59.168 Starting thread on core 1 00:18:59.168 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:59.168 [2024-11-27 07:14:10.216482] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:02.468 [2024-11-27 07:14:13.272547] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:02.468 Initializing NVMe Controllers 00:19:02.468 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:02.468 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:02.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:02.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:02.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:02.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:02.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:02.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:02.468 Initialization complete. Launching workers. 
00:19:02.468 Starting thread on core 1 with urgent priority queue 00:19:02.468 Starting thread on core 2 with urgent priority queue 00:19:02.468 Starting thread on core 3 with urgent priority queue 00:19:02.468 Starting thread on core 0 with urgent priority queue 00:19:02.468 SPDK bdev Controller (SPDK1 ) core 0: 9105.67 IO/s 10.98 secs/100000 ios 00:19:02.468 SPDK bdev Controller (SPDK1 ) core 1: 13738.33 IO/s 7.28 secs/100000 ios 00:19:02.468 SPDK bdev Controller (SPDK1 ) core 2: 8343.00 IO/s 11.99 secs/100000 ios 00:19:02.468 SPDK bdev Controller (SPDK1 ) core 3: 13177.33 IO/s 7.59 secs/100000 ios 00:19:02.468 ======================================================== 00:19:02.468 00:19:02.468 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:02.468 [2024-11-27 07:14:13.523599] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:02.468 Initializing NVMe Controllers 00:19:02.468 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:02.468 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:02.468 Namespace ID: 1 size: 0GB 00:19:02.468 Initialization complete. 00:19:02.468 INFO: using host memory buffer for IO 00:19:02.468 Hello world! 00:19:02.468 [2024-11-27 07:14:13.557779] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:02.468 07:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:02.729 [2024-11-27 07:14:13.797494] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:03.674 Initializing NVMe Controllers 00:19:03.674 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:03.674 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:03.674 Initialization complete. Launching workers. 
00:19:03.674 submit (in ns) avg, min, max = 5552.3, 2826.7, 3998551.7 00:19:03.674 complete (in ns) avg, min, max = 15779.8, 1661.7, 3998151.7 00:19:03.674 00:19:03.674 Submit histogram 00:19:03.674 ================ 00:19:03.674 Range in us Cumulative Count 00:19:03.674 2.827 - 2.840: 0.2279% ( 46) 00:19:03.674 2.840 - 2.853: 1.1792% ( 192) 00:19:03.674 2.853 - 2.867: 3.5079% ( 470) 00:19:03.674 2.867 - 2.880: 8.6459% ( 1037) 00:19:03.674 2.880 - 2.893: 14.5221% ( 1186) 00:19:03.674 2.893 - 2.907: 20.0614% ( 1118) 00:19:03.674 2.907 - 2.920: 26.2003% ( 1239) 00:19:03.674 2.920 - 2.933: 32.1954% ( 1210) 00:19:03.674 2.933 - 2.947: 36.7190% ( 913) 00:19:03.674 2.947 - 2.960: 41.5151% ( 968) 00:19:03.674 2.960 - 2.973: 46.9851% ( 1104) 00:19:03.674 2.973 - 2.987: 54.3774% ( 1492) 00:19:03.674 2.987 - 3.000: 62.6369% ( 1667) 00:19:03.674 3.000 - 3.013: 72.3827% ( 1967) 00:19:03.674 3.013 - 3.027: 80.3696% ( 1612) 00:19:03.674 3.027 - 3.040: 86.9494% ( 1328) 00:19:03.674 3.040 - 3.053: 91.1708% ( 852) 00:19:03.674 3.053 - 3.067: 94.7282% ( 718) 00:19:03.674 3.067 - 3.080: 96.9777% ( 454) 00:19:03.674 3.080 - 3.093: 98.3005% ( 267) 00:19:03.674 3.093 - 3.107: 98.9100% ( 123) 00:19:03.674 3.107 - 3.120: 99.2122% ( 61) 00:19:03.674 3.120 - 3.133: 99.4253% ( 43) 00:19:03.674 3.133 - 3.147: 99.5442% ( 24) 00:19:03.674 3.147 - 3.160: 99.5789% ( 7) 00:19:03.674 3.160 - 3.173: 99.6086% ( 6) 00:19:03.674 3.173 - 3.187: 99.6234% ( 3) 00:19:03.674 3.200 - 3.213: 99.6433% ( 4) 00:19:03.674 3.600 - 3.627: 99.6532% ( 2) 00:19:03.674 3.760 - 3.787: 99.6581% ( 1) 00:19:03.674 3.840 - 3.867: 99.6631% ( 1) 00:19:03.674 4.347 - 4.373: 99.6680% ( 1) 00:19:03.674 4.427 - 4.453: 99.6730% ( 1) 00:19:03.674 4.507 - 4.533: 99.6779% ( 1) 00:19:03.674 4.720 - 4.747: 99.6829% ( 1) 00:19:03.674 4.853 - 4.880: 99.6879% ( 1) 00:19:03.674 5.013 - 5.040: 99.6978% ( 2) 00:19:03.674 5.040 - 5.067: 99.7027% ( 1) 00:19:03.674 5.093 - 5.120: 99.7077% ( 1) 00:19:03.674 5.120 - 5.147: 99.7176% ( 2) 00:19:03.674 5.253 - 5.280: 99.7225% ( 1) 00:19:03.674 5.760 - 5.787: 99.7275% ( 1) 00:19:03.674 5.840 - 5.867: 99.7473% ( 4) 00:19:03.674 5.893 - 5.920: 99.7523% ( 1) 00:19:03.674 5.920 - 5.947: 99.7622% ( 2) 00:19:03.674 5.947 - 5.973: 99.7671% ( 1) 00:19:03.674 6.000 - 6.027: 99.7721% ( 1) 00:19:03.674 6.027 - 6.053: 99.7820% ( 2) 00:19:03.674 6.080 - 6.107: 99.7869% ( 1) 00:19:03.674 6.107 - 6.133: 99.7919% ( 1) 00:19:03.674 6.160 - 6.187: 99.8018% ( 2) 00:19:03.674 6.213 - 6.240: 99.8068% ( 1) 00:19:03.674 6.240 - 6.267: 99.8117% ( 1) 00:19:03.674 6.373 - 6.400: 99.8167% ( 1) 00:19:03.674 6.400 - 6.427: 99.8216% ( 1) 00:19:03.674 6.427 - 6.453: 99.8266% ( 1) 00:19:03.674 6.453 - 6.480: 99.8315% ( 1) 00:19:03.674 6.480 - 6.507: 99.8415% ( 2) 00:19:03.674 6.560 - 6.587: 99.8464% ( 1) 00:19:03.674 6.587 - 6.613: 99.8514% ( 1) 00:19:03.674 6.667 - 6.693: 99.8563% ( 1) 00:19:03.674 6.720 - 6.747: 99.8613% ( 1) 00:19:03.674 6.827 - 6.880: 99.8761% ( 3) 00:19:03.674 6.880 - 6.933: 99.8811% ( 1) 00:19:03.674 6.933 - 6.987: 99.8860% ( 1) 00:19:03.674 7.147 - 7.200: 99.9059% ( 4) 00:19:03.674 7.200 - 7.253: 99.9108% ( 1) 00:19:03.674 7.253 - 7.307: 99.9158% ( 1) 00:19:03.674 7.360 - 7.413: 99.9207% ( 1) 00:19:03.674 7.627 - 7.680: 99.9257% ( 1) 00:19:03.674 10.187 - 10.240: 99.9306% ( 1) 00:19:03.674 12.160 - 12.213: 99.9356% ( 1) 00:19:03.674 3986.773 - 4014.080: 100.0000% ( 13) 00:19:03.674 00:19:03.674 Complete histogram 00:19:03.674 ================== 00:19:03.674 Range in us Cumulative Count 00:19:03.674 1.660 - 1.667: 0.0396% ( 8) 
00:19:03.674 1.667 - 1.673: 0.6045% ( 114) 00:19:03.674 1.673 - 1.680: 0.8819% ( 56) 00:19:03.674 1.680 - 1.687: 1.0355% ( 31) 00:19:03.674 1.687 - [2024-11-27 07:14:14.818979] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:03.674 1.693: 1.1842% ( 30) 00:19:03.674 1.693 - 1.700: 1.2585% ( 15) 00:19:03.674 1.700 - 1.707: 1.2981% ( 8) 00:19:03.674 1.707 - 1.720: 39.6522% ( 7741) 00:19:03.674 1.720 - 1.733: 65.8772% ( 5293) 00:19:03.674 1.733 - 1.747: 79.6710% ( 2784) 00:19:03.674 1.747 - 1.760: 83.2731% ( 727) 00:19:03.674 1.760 - 1.773: 84.1599% ( 179) 00:19:03.674 1.773 - 1.787: 88.8768% ( 952) 00:19:03.674 1.787 - 1.800: 94.6291% ( 1161) 00:19:03.674 1.800 - 1.813: 97.8744% ( 655) 00:19:03.674 1.813 - 1.827: 99.0041% ( 228) 00:19:03.674 1.827 - 1.840: 99.3955% ( 79) 00:19:03.674 1.840 - 1.853: 99.4798% ( 17) 00:19:03.674 1.853 - 1.867: 99.5045% ( 5) 00:19:03.674 2.027 - 2.040: 99.5095% ( 1) 00:19:03.674 4.267 - 4.293: 99.5144% ( 1) 00:19:03.674 4.427 - 4.453: 99.5194% ( 1) 00:19:03.674 4.507 - 4.533: 99.5244% ( 1) 00:19:03.674 4.720 - 4.747: 99.5293% ( 1) 00:19:03.674 4.907 - 4.933: 99.5343% ( 1) 00:19:03.674 4.960 - 4.987: 99.5392% ( 1) 00:19:03.674 5.120 - 5.147: 99.5491% ( 2) 00:19:03.674 5.147 - 5.173: 99.5541% ( 1) 00:19:03.674 5.200 - 5.227: 99.5689% ( 3) 00:19:03.674 5.253 - 5.280: 99.5739% ( 1) 00:19:03.674 5.280 - 5.307: 99.5789% ( 1) 00:19:03.674 5.307 - 5.333: 99.5888% ( 2) 00:19:03.674 5.360 - 5.387: 99.5987% ( 2) 00:19:03.674 5.413 - 5.440: 99.6036% ( 1) 00:19:03.674 5.547 - 5.573: 99.6086% ( 1) 00:19:03.674 5.600 - 5.627: 99.6135% ( 1) 00:19:03.674 5.760 - 5.787: 99.6185% ( 1) 00:19:03.674 5.787 - 5.813: 99.6234% ( 1) 00:19:03.674 5.973 - 6.000: 99.6284% ( 1) 00:19:03.674 6.453 - 6.480: 99.6334% ( 1) 00:19:03.674 8.907 - 8.960: 99.6383% ( 1) 00:19:03.674 11.147 - 11.200: 99.6433% ( 1) 00:19:03.674 11.840 - 11.893: 99.6482% ( 1) 00:19:03.674 3713.707 - 3741.013: 99.6532% ( 1) 00:19:03.674 3986.773 - 4014.080: 100.0000% ( 70) 00:19:03.674 00:19:03.674 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:03.674 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:03.674 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:03.674 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:03.674 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:03.936 [ 00:19:03.936 { 00:19:03.936 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:03.936 "subtype": "Discovery", 00:19:03.936 "listen_addresses": [], 00:19:03.936 "allow_any_host": true, 00:19:03.936 "hosts": [] 00:19:03.936 }, 00:19:03.936 { 00:19:03.936 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:03.936 "subtype": "NVMe", 00:19:03.936 "listen_addresses": [ 00:19:03.936 { 00:19:03.936 "trtype": "VFIOUSER", 00:19:03.936 "adrfam": "IPv4", 00:19:03.936 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:03.936 "trsvcid": "0" 00:19:03.936 } 00:19:03.936 ], 00:19:03.936 "allow_any_host": true, 00:19:03.936 "hosts": [], 00:19:03.936 "serial_number": "SPDK1", 00:19:03.936 "model_number": "SPDK bdev 
Controller", 00:19:03.936 "max_namespaces": 32, 00:19:03.936 "min_cntlid": 1, 00:19:03.936 "max_cntlid": 65519, 00:19:03.936 "namespaces": [ 00:19:03.936 { 00:19:03.936 "nsid": 1, 00:19:03.936 "bdev_name": "Malloc1", 00:19:03.936 "name": "Malloc1", 00:19:03.936 "nguid": "87A003005ABA43F4809FA79F5859C8B1", 00:19:03.936 "uuid": "87a00300-5aba-43f4-809f-a79f5859c8b1" 00:19:03.936 } 00:19:03.936 ] 00:19:03.936 }, 00:19:03.936 { 00:19:03.936 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:03.936 "subtype": "NVMe", 00:19:03.936 "listen_addresses": [ 00:19:03.936 { 00:19:03.936 "trtype": "VFIOUSER", 00:19:03.936 "adrfam": "IPv4", 00:19:03.936 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:03.936 "trsvcid": "0" 00:19:03.936 } 00:19:03.936 ], 00:19:03.936 "allow_any_host": true, 00:19:03.936 "hosts": [], 00:19:03.936 "serial_number": "SPDK2", 00:19:03.936 "model_number": "SPDK bdev Controller", 00:19:03.936 "max_namespaces": 32, 00:19:03.936 "min_cntlid": 1, 00:19:03.936 "max_cntlid": 65519, 00:19:03.936 "namespaces": [ 00:19:03.936 { 00:19:03.936 "nsid": 1, 00:19:03.936 "bdev_name": "Malloc2", 00:19:03.936 "name": "Malloc2", 00:19:03.936 "nguid": "EB8FB61D97504721BD3FE1B9428AC58F", 00:19:03.936 "uuid": "eb8fb61d-9750-4721-bd3f-e1b9428ac58f" 00:19:03.936 } 00:19:03.936 ] 00:19:03.936 } 00:19:03.936 ] 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2349677 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:03.936 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:04.197 [2024-11-27 07:14:15.195929] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:04.197 Malloc3 00:19:04.197 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:04.197 [2024-11-27 07:14:15.392276] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:04.458 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:04.458 Asynchronous Event Request test 00:19:04.458 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:04.458 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:04.458 Registering asynchronous event callbacks... 00:19:04.458 Starting namespace attribute notice tests for all controllers... 00:19:04.458 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:04.458 aer_cb - Changed Namespace 00:19:04.458 Cleaning up... 00:19:04.458 [ 00:19:04.458 { 00:19:04.458 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:04.458 "subtype": "Discovery", 00:19:04.458 "listen_addresses": [], 00:19:04.458 "allow_any_host": true, 00:19:04.458 "hosts": [] 00:19:04.458 }, 00:19:04.458 { 00:19:04.458 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:04.458 "subtype": "NVMe", 00:19:04.458 "listen_addresses": [ 00:19:04.458 { 00:19:04.458 "trtype": "VFIOUSER", 00:19:04.458 "adrfam": "IPv4", 00:19:04.458 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:04.458 "trsvcid": "0" 00:19:04.458 } 00:19:04.458 ], 00:19:04.458 "allow_any_host": true, 00:19:04.458 "hosts": [], 00:19:04.458 "serial_number": "SPDK1", 00:19:04.458 "model_number": "SPDK bdev Controller", 00:19:04.458 "max_namespaces": 32, 00:19:04.458 "min_cntlid": 1, 00:19:04.458 "max_cntlid": 65519, 00:19:04.458 "namespaces": [ 00:19:04.458 { 00:19:04.458 "nsid": 1, 00:19:04.458 "bdev_name": "Malloc1", 00:19:04.458 "name": "Malloc1", 00:19:04.458 "nguid": "87A003005ABA43F4809FA79F5859C8B1", 00:19:04.458 "uuid": "87a00300-5aba-43f4-809f-a79f5859c8b1" 00:19:04.458 }, 00:19:04.458 { 00:19:04.458 "nsid": 2, 00:19:04.458 "bdev_name": "Malloc3", 00:19:04.458 "name": "Malloc3", 00:19:04.458 "nguid": "ADACA57B0F9F4F71B9F99717984D0F0B", 00:19:04.458 "uuid": "adaca57b-0f9f-4f71-b9f9-9717984d0f0b" 00:19:04.458 } 00:19:04.458 ] 00:19:04.458 }, 00:19:04.458 { 00:19:04.458 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:04.458 "subtype": "NVMe", 00:19:04.458 "listen_addresses": [ 00:19:04.458 { 00:19:04.458 "trtype": "VFIOUSER", 00:19:04.458 "adrfam": "IPv4", 00:19:04.458 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:04.458 "trsvcid": "0" 00:19:04.458 } 00:19:04.458 ], 00:19:04.458 "allow_any_host": true, 00:19:04.458 "hosts": [], 00:19:04.458 "serial_number": "SPDK2", 00:19:04.458 "model_number": "SPDK bdev 
Controller", 00:19:04.458 "max_namespaces": 32, 00:19:04.458 "min_cntlid": 1, 00:19:04.458 "max_cntlid": 65519, 00:19:04.458 "namespaces": [ 00:19:04.458 { 00:19:04.458 "nsid": 1, 00:19:04.458 "bdev_name": "Malloc2", 00:19:04.458 "name": "Malloc2", 00:19:04.458 "nguid": "EB8FB61D97504721BD3FE1B9428AC58F", 00:19:04.458 "uuid": "eb8fb61d-9750-4721-bd3f-e1b9428ac58f" 00:19:04.458 } 00:19:04.458 ] 00:19:04.458 } 00:19:04.458 ] 00:19:04.458 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2349677 00:19:04.458 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:04.458 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:04.458 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:04.458 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:04.458 [2024-11-27 07:14:15.620488] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:19:04.458 [2024-11-27 07:14:15.620531] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349742 ] 00:19:04.458 [2024-11-27 07:14:15.658399] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:04.721 [2024-11-27 07:14:15.667358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:04.721 [2024-11-27 07:14:15.667378] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faf4ffdf000 00:19:04.721 [2024-11-27 07:14:15.668357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.721 [2024-11-27 07:14:15.669360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.721 [2024-11-27 07:14:15.670369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.721 [2024-11-27 07:14:15.671373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:04.721 [2024-11-27 07:14:15.672381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:04.721 [2024-11-27 07:14:15.673387] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.721 [2024-11-27 07:14:15.674389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:04.721 [2024-11-27 07:14:15.675392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:19:04.721 [2024-11-27 07:14:15.676403] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:04.722 [2024-11-27 07:14:15.676410] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faf4ffd4000 00:19:04.722 [2024-11-27 07:14:15.677322] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:04.722 [2024-11-27 07:14:15.686697] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:04.722 [2024-11-27 07:14:15.686715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:04.722 [2024-11-27 07:14:15.691782] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:04.722 [2024-11-27 07:14:15.691814] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:04.722 [2024-11-27 07:14:15.691873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:04.722 [2024-11-27 07:14:15.691886] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:04.722 [2024-11-27 07:14:15.691890] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:04.722 [2024-11-27 07:14:15.692783] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:04.722 [2024-11-27 07:14:15.692792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:04.722 [2024-11-27 07:14:15.692797] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:04.722 [2024-11-27 07:14:15.693792] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:04.722 [2024-11-27 07:14:15.693799] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:04.722 [2024-11-27 07:14:15.693807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:04.722 [2024-11-27 07:14:15.694798] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:04.722 [2024-11-27 07:14:15.694805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:04.722 [2024-11-27 07:14:15.695801] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:04.722 [2024-11-27 07:14:15.695808] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:19:04.722 [2024-11-27 07:14:15.695811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:04.722 [2024-11-27 07:14:15.695816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:04.722 [2024-11-27 07:14:15.695922] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:04.722 [2024-11-27 07:14:15.695926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:04.722 [2024-11-27 07:14:15.695930] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:04.722 [2024-11-27 07:14:15.696813] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:04.722 [2024-11-27 07:14:15.697816] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:04.722 [2024-11-27 07:14:15.698824] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:04.722 [2024-11-27 07:14:15.699830] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:04.722 [2024-11-27 07:14:15.699859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:04.722 [2024-11-27 07:14:15.700839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:04.722 [2024-11-27 07:14:15.700845] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:04.722 [2024-11-27 07:14:15.700849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.700864] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:04.722 [2024-11-27 07:14:15.700869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.700880] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:04.722 [2024-11-27 07:14:15.700884] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.722 [2024-11-27 07:14:15.700886] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:04.722 [2024-11-27 07:14:15.700895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.722 [2024-11-27 07:14:15.708167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:04.722 
[2024-11-27 07:14:15.708178] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:04.722 [2024-11-27 07:14:15.708181] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:04.722 [2024-11-27 07:14:15.708185] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:04.722 [2024-11-27 07:14:15.708188] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:04.722 [2024-11-27 07:14:15.708192] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:04.722 [2024-11-27 07:14:15.708196] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:04.722 [2024-11-27 07:14:15.708199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.708205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.708213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:04.722 [2024-11-27 07:14:15.716164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:04.722 [2024-11-27 07:14:15.716173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.722 [2024-11-27 07:14:15.716180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.722 [2024-11-27 07:14:15.716186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.722 [2024-11-27 07:14:15.716191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.722 [2024-11-27 07:14:15.716195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.716201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.716208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:04.722 [2024-11-27 07:14:15.724164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:04.722 [2024-11-27 07:14:15.724176] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:04.722 [2024-11-27 07:14:15.724180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:19:04.722 [2024-11-27 07:14:15.724187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.724191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.724198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:04.722 [2024-11-27 07:14:15.732163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:04.722 [2024-11-27 07:14:15.732209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.732219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.732224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:04.722 [2024-11-27 07:14:15.732227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:04.722 [2024-11-27 07:14:15.732230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:04.722 [2024-11-27 07:14:15.732235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:04.722 [2024-11-27 07:14:15.740162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:04.722 [2024-11-27 07:14:15.740172] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:04.722 [2024-11-27 07:14:15.740178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.740184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:04.722 [2024-11-27 07:14:15.740189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:04.722 [2024-11-27 07:14:15.740192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.723 [2024-11-27 07:14:15.740194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:04.723 [2024-11-27 07:14:15.740199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.748164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:04.723 [2024-11-27 07:14:15.748173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.748178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.748184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:04.723 [2024-11-27 07:14:15.748187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.723 [2024-11-27 07:14:15.748189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:04.723 [2024-11-27 07:14:15.748193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.756164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:04.723 [2024-11-27 07:14:15.756174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.756179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.756184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.756189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.756192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.756198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.756202] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:04.723 [2024-11-27 07:14:15.756205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:04.723 [2024-11-27 07:14:15.756209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:04.723 [2024-11-27 07:14:15.756222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.764164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:04.723 [2024-11-27 07:14:15.764175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.772165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:04.723 [2024-11-27 07:14:15.772175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.780165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:19:04.723 [2024-11-27 07:14:15.780175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.788164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:04.723 [2024-11-27 07:14:15.788176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:04.723 [2024-11-27 07:14:15.788179] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:04.723 [2024-11-27 07:14:15.788182] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:04.723 [2024-11-27 07:14:15.788185] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:04.723 [2024-11-27 07:14:15.788187] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:04.723 [2024-11-27 07:14:15.788192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:04.723 [2024-11-27 07:14:15.788197] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:04.723 [2024-11-27 07:14:15.788200] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:04.723 [2024-11-27 07:14:15.788203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:04.723 [2024-11-27 07:14:15.788208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.788213] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:04.723 [2024-11-27 07:14:15.788216] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.723 [2024-11-27 07:14:15.788218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:04.723 [2024-11-27 07:14:15.788223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.788229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:04.723 [2024-11-27 07:14:15.788234] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:04.723 [2024-11-27 07:14:15.788236] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:04.723 [2024-11-27 07:14:15.788241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:04.723 [2024-11-27 07:14:15.796164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:04.723 [2024-11-27 07:14:15.796175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:04.723 [2024-11-27 07:14:15.796183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:04.723 
[2024-11-27 07:14:15.796188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:04.723 ===================================================== 00:19:04.723 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:04.723 ===================================================== 00:19:04.723 Controller Capabilities/Features 00:19:04.723 ================================ 00:19:04.723 Vendor ID: 4e58 00:19:04.723 Subsystem Vendor ID: 4e58 00:19:04.723 Serial Number: SPDK2 00:19:04.723 Model Number: SPDK bdev Controller 00:19:04.723 Firmware Version: 25.01 00:19:04.723 Recommended Arb Burst: 6 00:19:04.723 IEEE OUI Identifier: 8d 6b 50 00:19:04.723 Multi-path I/O 00:19:04.723 May have multiple subsystem ports: Yes 00:19:04.723 May have multiple controllers: Yes 00:19:04.723 Associated with SR-IOV VF: No 00:19:04.723 Max Data Transfer Size: 131072 00:19:04.723 Max Number of Namespaces: 32 00:19:04.723 Max Number of I/O Queues: 127 00:19:04.723 NVMe Specification Version (VS): 1.3 00:19:04.723 NVMe Specification Version (Identify): 1.3 00:19:04.723 Maximum Queue Entries: 256 00:19:04.723 Contiguous Queues Required: Yes 00:19:04.723 Arbitration Mechanisms Supported 00:19:04.723 Weighted Round Robin: Not Supported 00:19:04.723 Vendor Specific: Not Supported 00:19:04.723 Reset Timeout: 15000 ms 00:19:04.723 Doorbell Stride: 4 bytes 00:19:04.723 NVM Subsystem Reset: Not Supported 00:19:04.723 Command Sets Supported 00:19:04.723 NVM Command Set: Supported 00:19:04.723 Boot Partition: Not Supported 00:19:04.723 Memory Page Size Minimum: 4096 bytes 00:19:04.723 Memory Page Size Maximum: 4096 bytes 00:19:04.723 Persistent Memory Region: Not Supported 00:19:04.723 Optional Asynchronous Events Supported 00:19:04.723 Namespace Attribute Notices: Supported 00:19:04.723 Firmware Activation Notices: Not Supported 00:19:04.723 ANA Change Notices: Not Supported 00:19:04.723 PLE Aggregate Log Change Notices: Not Supported 00:19:04.723 LBA Status Info Alert Notices: Not Supported 00:19:04.723 EGE Aggregate Log Change Notices: Not Supported 00:19:04.723 Normal NVM Subsystem Shutdown event: Not Supported 00:19:04.723 Zone Descriptor Change Notices: Not Supported 00:19:04.723 Discovery Log Change Notices: Not Supported 00:19:04.723 Controller Attributes 00:19:04.723 128-bit Host Identifier: Supported 00:19:04.723 Non-Operational Permissive Mode: Not Supported 00:19:04.723 NVM Sets: Not Supported 00:19:04.723 Read Recovery Levels: Not Supported 00:19:04.723 Endurance Groups: Not Supported 00:19:04.723 Predictable Latency Mode: Not Supported 00:19:04.723 Traffic Based Keep ALive: Not Supported 00:19:04.723 Namespace Granularity: Not Supported 00:19:04.723 SQ Associations: Not Supported 00:19:04.723 UUID List: Not Supported 00:19:04.723 Multi-Domain Subsystem: Not Supported 00:19:04.723 Fixed Capacity Management: Not Supported 00:19:04.723 Variable Capacity Management: Not Supported 00:19:04.723 Delete Endurance Group: Not Supported 00:19:04.723 Delete NVM Set: Not Supported 00:19:04.723 Extended LBA Formats Supported: Not Supported 00:19:04.723 Flexible Data Placement Supported: Not Supported 00:19:04.723 00:19:04.723 Controller Memory Buffer Support 00:19:04.723 ================================ 00:19:04.723 Supported: No 00:19:04.723 00:19:04.723 Persistent Memory Region Support 00:19:04.723 ================================ 00:19:04.723 Supported: No 00:19:04.723 00:19:04.723 Admin Command Set Attributes 
00:19:04.723 ============================ 00:19:04.723 Security Send/Receive: Not Supported 00:19:04.723 Format NVM: Not Supported 00:19:04.723 Firmware Activate/Download: Not Supported 00:19:04.723 Namespace Management: Not Supported 00:19:04.723 Device Self-Test: Not Supported 00:19:04.723 Directives: Not Supported 00:19:04.723 NVMe-MI: Not Supported 00:19:04.724 Virtualization Management: Not Supported 00:19:04.724 Doorbell Buffer Config: Not Supported 00:19:04.724 Get LBA Status Capability: Not Supported 00:19:04.724 Command & Feature Lockdown Capability: Not Supported 00:19:04.724 Abort Command Limit: 4 00:19:04.724 Async Event Request Limit: 4 00:19:04.724 Number of Firmware Slots: N/A 00:19:04.724 Firmware Slot 1 Read-Only: N/A 00:19:04.724 Firmware Activation Without Reset: N/A 00:19:04.724 Multiple Update Detection Support: N/A 00:19:04.724 Firmware Update Granularity: No Information Provided 00:19:04.724 Per-Namespace SMART Log: No 00:19:04.724 Asymmetric Namespace Access Log Page: Not Supported 00:19:04.724 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:04.724 Command Effects Log Page: Supported 00:19:04.724 Get Log Page Extended Data: Supported 00:19:04.724 Telemetry Log Pages: Not Supported 00:19:04.724 Persistent Event Log Pages: Not Supported 00:19:04.724 Supported Log Pages Log Page: May Support 00:19:04.724 Commands Supported & Effects Log Page: Not Supported 00:19:04.724 Feature Identifiers & Effects Log Page:May Support 00:19:04.724 NVMe-MI Commands & Effects Log Page: May Support 00:19:04.724 Data Area 4 for Telemetry Log: Not Supported 00:19:04.724 Error Log Page Entries Supported: 128 00:19:04.724 Keep Alive: Supported 00:19:04.724 Keep Alive Granularity: 10000 ms 00:19:04.724 00:19:04.724 NVM Command Set Attributes 00:19:04.724 ========================== 00:19:04.724 Submission Queue Entry Size 00:19:04.724 Max: 64 00:19:04.724 Min: 64 00:19:04.724 Completion Queue Entry Size 00:19:04.724 Max: 16 00:19:04.724 Min: 16 00:19:04.724 Number of Namespaces: 32 00:19:04.724 Compare Command: Supported 00:19:04.724 Write Uncorrectable Command: Not Supported 00:19:04.724 Dataset Management Command: Supported 00:19:04.724 Write Zeroes Command: Supported 00:19:04.724 Set Features Save Field: Not Supported 00:19:04.724 Reservations: Not Supported 00:19:04.724 Timestamp: Not Supported 00:19:04.724 Copy: Supported 00:19:04.724 Volatile Write Cache: Present 00:19:04.724 Atomic Write Unit (Normal): 1 00:19:04.724 Atomic Write Unit (PFail): 1 00:19:04.724 Atomic Compare & Write Unit: 1 00:19:04.724 Fused Compare & Write: Supported 00:19:04.724 Scatter-Gather List 00:19:04.724 SGL Command Set: Supported (Dword aligned) 00:19:04.724 SGL Keyed: Not Supported 00:19:04.724 SGL Bit Bucket Descriptor: Not Supported 00:19:04.724 SGL Metadata Pointer: Not Supported 00:19:04.724 Oversized SGL: Not Supported 00:19:04.724 SGL Metadata Address: Not Supported 00:19:04.724 SGL Offset: Not Supported 00:19:04.724 Transport SGL Data Block: Not Supported 00:19:04.724 Replay Protected Memory Block: Not Supported 00:19:04.724 00:19:04.724 Firmware Slot Information 00:19:04.724 ========================= 00:19:04.724 Active slot: 1 00:19:04.724 Slot 1 Firmware Revision: 25.01 00:19:04.724 00:19:04.724 00:19:04.724 Commands Supported and Effects 00:19:04.724 ============================== 00:19:04.724 Admin Commands 00:19:04.724 -------------- 00:19:04.724 Get Log Page (02h): Supported 00:19:04.724 Identify (06h): Supported 00:19:04.724 Abort (08h): Supported 00:19:04.724 Set Features (09h): Supported 
00:19:04.724 Get Features (0Ah): Supported 00:19:04.724 Asynchronous Event Request (0Ch): Supported 00:19:04.724 Keep Alive (18h): Supported 00:19:04.724 I/O Commands 00:19:04.724 ------------ 00:19:04.724 Flush (00h): Supported LBA-Change 00:19:04.724 Write (01h): Supported LBA-Change 00:19:04.724 Read (02h): Supported 00:19:04.724 Compare (05h): Supported 00:19:04.724 Write Zeroes (08h): Supported LBA-Change 00:19:04.724 Dataset Management (09h): Supported LBA-Change 00:19:04.724 Copy (19h): Supported LBA-Change 00:19:04.724 00:19:04.724 Error Log 00:19:04.724 ========= 00:19:04.724 00:19:04.724 Arbitration 00:19:04.724 =========== 00:19:04.724 Arbitration Burst: 1 00:19:04.724 00:19:04.724 Power Management 00:19:04.724 ================ 00:19:04.724 Number of Power States: 1 00:19:04.724 Current Power State: Power State #0 00:19:04.724 Power State #0: 00:19:04.724 Max Power: 0.00 W 00:19:04.724 Non-Operational State: Operational 00:19:04.724 Entry Latency: Not Reported 00:19:04.724 Exit Latency: Not Reported 00:19:04.724 Relative Read Throughput: 0 00:19:04.724 Relative Read Latency: 0 00:19:04.724 Relative Write Throughput: 0 00:19:04.724 Relative Write Latency: 0 00:19:04.724 Idle Power: Not Reported 00:19:04.724 Active Power: Not Reported 00:19:04.724 Non-Operational Permissive Mode: Not Supported 00:19:04.724 00:19:04.724 Health Information 00:19:04.724 ================== 00:19:04.724 Critical Warnings: 00:19:04.724 Available Spare Space: OK 00:19:04.724 Temperature: OK 00:19:04.724 Device Reliability: OK 00:19:04.724 Read Only: No 00:19:04.724 Volatile Memory Backup: OK 00:19:04.724 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:04.724 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:04.724 Available Spare: 0% 00:19:04.724 Available Spare Threshold: 0% 00:19:04.725 Life Percentage Used: 0% 00:19:04.725 Data Units Read: 0 00:19:04.725 Data Units Written: 0 00:19:04.725 Host Read Commands: 0 00:19:04.725 Host Write Commands: 0 00:19:04.725 Controller Busy Time: 0 minutes 00:19:04.725 Power Cycles: 0 00:19:04.725 Power On Hours: 0 hours 00:19:04.725 Unsafe Shutdowns: 0 00:19:04.725 Unrecoverable Media Errors: 0 00:19:04.725 Lifetime Error Log Entries: 0 00:19:04.725 Warning Temperature Time: 0 minutes 00:19:04.725 Critical Temperature Time: 0 minutes 00:19:04.725 00:19:04.725 Number of Queues 00:19:04.725 ================ 00:19:04.725 Number of I/O Submission Queues: 127 00:19:04.725 Number of I/O Completion Queues: 127 00:19:04.725 00:19:04.725 Active Namespaces 00:19:04.725 ================= 00:19:04.725 Namespace ID:1 00:19:04.725 Error Recovery Timeout: Unlimited 00:19:04.725 Command Set Identifier: NVM (00h) 00:19:04.725 Deallocate: Supported 00:19:04.725 Deallocated/Unwritten Error: Not Supported 00:19:04.725 Deallocated Read Value: Unknown 00:19:04.725 Deallocate in Write Zeroes: Not Supported 00:19:04.725 Deallocated Guard Field: 0xFFFF 00:19:04.725 Flush: Supported 00:19:04.725 Reservation: Supported 00:19:04.725 Namespace Sharing Capabilities: Multiple Controllers 00:19:04.725 Size (in LBAs): 131072 (0GiB) 00:19:04.725 Capacity (in LBAs): 131072 (0GiB) 00:19:04.725 Utilization (in LBAs): 131072 (0GiB) 00:19:04.725 NGUID: EB8FB61D97504721BD3FE1B9428AC58F 00:19:04.725 UUID: eb8fb61d-9750-4721-bd3f-e1b9428ac58f 00:19:04.725 Thin Provisioning: Not Supported 00:19:04.725 Per-NS Atomic Units: Yes 00:19:04.725 Atomic Boundary Size (Normal): 0 00:19:04.725 Atomic Boundary Size (PFail): 0 00:19:04.725 Atomic Boundary Offset: 0 00:19:04.725 Maximum Single Source Range Length: 65535 00:19:04.725 Maximum Copy Length: 65535 00:19:04.725 Maximum Source Range Count: 1 00:19:04.725 NGUID/EUI64 Never Reused: No 00:19:04.725 Namespace Write Protected: No 00:19:04.725 Number of LBA Formats: 1 00:19:04.725 Current LBA Format: LBA Format #00 00:19:04.725 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.725 00:19:04.725
[2024-11-27 07:14:15.796262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:04.724 [2024-11-27 07:14:15.804165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:04.724 [2024-11-27 07:14:15.804189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:04.724 [2024-11-27 07:14:15.804196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.724 [2024-11-27 07:14:15.804201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.724 [2024-11-27 07:14:15.804205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.724 [2024-11-27 07:14:15.804210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.725 [2024-11-27 07:14:15.804238] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:04.725 [2024-11-27 07:14:15.804245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:04.725 [2024-11-27 07:14:15.805250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:04.725 [2024-11-27 07:14:15.805285] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:04.725 [2024-11-27 07:14:15.805291] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:04.725 [2024-11-27 07:14:15.806248] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:04.725 [2024-11-27 07:14:15.806257] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:04.725 [2024-11-27 07:14:15.806300] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:04.725 [2024-11-27 07:14:15.807271] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:04.725
07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:04.986 [2024-11-27 07:14:15.994547] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:10.276 Initializing NVMe Controllers 00:19:10.276
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:10.276 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:10.276 Initialization complete. Launching workers. 00:19:10.276 ======================================================== 00:19:10.276 Latency(us) 00:19:10.276 Device Information : IOPS MiB/s Average min max 00:19:10.276 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39981.00 156.18 3203.90 850.43 6800.72 00:19:10.276 ======================================================== 00:19:10.276 Total : 39981.00 156.18 3203.90 850.43 6800.72 00:19:10.276 00:19:10.276 [2024-11-27 07:14:21.104359] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:10.276
07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:10.276 [2024-11-27 07:14:21.293923] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:15.565 Initializing NVMe Controllers 00:19:15.565 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:15.565 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:15.565 Initialization complete. Launching workers. 00:19:15.565 ======================================================== 00:19:15.565 Latency(us) 00:19:15.565 Device Information : IOPS MiB/s Average min max 00:19:15.565 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39973.78 156.15 3201.97 865.67 8756.34 00:19:15.565 ======================================================== 00:19:15.565 Total : 39973.78 156.15 3201.97 865.67 8756.34 00:19:15.565 00:19:15.565 [2024-11-27 07:14:26.313502] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:15.566
07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:15.566 [2024-11-27 07:14:26.507676] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:20.980 Initializing NVMe Controllers 00:19:20.980 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:20.980 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:20.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:20.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:20.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:20.980 Initialization complete. Launching workers. 00:19:20.980 Starting thread on core 2 00:19:20.980 Starting thread on core 3 00:19:20.981 Starting thread on core 1 00:19:20.981 [2024-11-27 07:14:31.643245] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:20.980
07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:20.981 [2024-11-27 07:14:31.893501] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:24.283 Initializing NVMe Controllers 00:19:24.283 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.284 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:24.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:24.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:24.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:24.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:24.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:24.284 Initialization complete. Launching workers. 00:19:24.284 Starting thread on core 1 with urgent priority queue 00:19:24.284 Starting thread on core 2 with urgent priority queue 00:19:24.284 Starting thread on core 3 with urgent priority queue 00:19:24.284 Starting thread on core 0 with urgent priority queue 00:19:24.284 SPDK bdev Controller (SPDK2 ) core 0: 14525.67 IO/s 6.88 secs/100000 ios 00:19:24.284 SPDK bdev Controller (SPDK2 ) core 1: 8321.33 IO/s 12.02 secs/100000 ios 00:19:24.284 SPDK bdev Controller (SPDK2 ) core 2: 9624.67 IO/s 10.39 secs/100000 ios 00:19:24.284 SPDK bdev Controller (SPDK2 ) core 3: 11251.33 IO/s 8.89 secs/100000 ios 00:19:24.284 ======================================================== 00:19:24.284 00:19:24.284 [2024-11-27 07:14:34.969414] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:24.283
07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:24.284 [2024-11-27 07:14:35.220532] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:24.284 Initializing NVMe Controllers 00:19:24.284 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.284 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.284 Namespace ID: 1 size: 0GB 00:19:24.284 Initialization complete. 00:19:24.284 INFO: using host memory buffer for IO 00:19:24.284 Hello world!
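Two of the figures above can be sanity-checked by hand: in the spdk_nvme_perf tables, MiB/s is IOPS × I/O size / 2^20 (here 39981.00 × 4096 / 1048576 ≈ 156.18), and in the arbitration summary, 'secs/100000 ios' is simply 100000 / IO/s (100000 / 14525.67 ≈ 6.88 for core 0). Illustrative bash one-liners, not part of the run:

awk 'BEGIN { printf "%.2f MiB/s\n", 39981.00 * 4096 / 1048576 }'    # read-perf row
awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 14525.67 }'  # arbitration core 0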
00:19:24.284 [2024-11-27 07:14:35.230601] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:24.284 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:24.284 [2024-11-27 07:14:35.470898] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:25.670 Initializing NVMe Controllers 00:19:25.670 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:25.670 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:25.670 Initialization complete. Launching workers. 00:19:25.670 submit (in ns) avg, min, max = 6418.5, 2828.3, 4000554.2 00:19:25.670 complete (in ns) avg, min, max = 16970.8, 1628.3, 3998540.0 00:19:25.670 00:19:25.670 Submit histogram 00:19:25.670 ================ 00:19:25.670 Range in us Cumulative Count 00:19:25.670 2.827 - 2.840: 0.3190% ( 66) 00:19:25.670 2.840 - 2.853: 1.3435% ( 212) 00:19:25.670 2.853 - 2.867: 4.1949% ( 590) 00:19:25.670 2.867 - 2.880: 8.7038% ( 933) 00:19:25.670 2.880 - 2.893: 14.2905% ( 1156) 00:19:25.670 2.893 - 2.907: 19.0460% ( 984) 00:19:25.670 2.907 - 2.920: 24.8357% ( 1198) 00:19:25.670 2.920 - 2.933: 30.2049% ( 1111) 00:19:25.670 2.933 - 2.947: 36.8065% ( 1366) 00:19:25.670 2.947 - 2.960: 41.6973% ( 1012) 00:19:25.670 2.960 - 2.973: 46.2546% ( 943) 00:19:25.670 2.973 - 2.987: 51.4595% ( 1077) 00:19:25.670 2.987 - 3.000: 59.4239% ( 1648) 00:19:25.670 3.000 - 3.013: 69.0073% ( 1983) 00:19:25.670 3.013 - 3.027: 78.2380% ( 1910) 00:19:25.670 3.027 - 3.040: 85.5016% ( 1503) 00:19:25.670 3.040 - 3.053: 91.1657% ( 1172) 00:19:25.670 3.053 - 3.067: 94.9014% ( 773) 00:19:25.670 3.067 - 3.080: 97.1197% ( 459) 00:19:25.670 3.080 - 3.093: 98.3617% ( 257) 00:19:25.670 3.093 - 3.107: 99.1301% ( 159) 00:19:25.670 3.107 - 3.120: 99.3911% ( 54) 00:19:25.670 3.120 - 3.133: 99.4926% ( 21) 00:19:25.670 3.133 - 3.147: 99.5554% ( 13) 00:19:25.670 3.147 - 3.160: 99.5699% ( 3) 00:19:25.670 3.187 - 3.200: 99.5747% ( 1) 00:19:25.670 3.213 - 3.227: 99.5844% ( 2) 00:19:25.670 3.293 - 3.307: 99.5892% ( 1) 00:19:25.670 3.520 - 3.547: 99.5940% ( 1) 00:19:25.670 3.733 - 3.760: 99.5989% ( 1) 00:19:25.670 3.787 - 3.813: 99.6037% ( 1) 00:19:25.670 3.813 - 3.840: 99.6085% ( 1) 00:19:25.670 4.107 - 4.133: 99.6134% ( 1) 00:19:25.670 4.187 - 4.213: 99.6182% ( 1) 00:19:25.670 4.213 - 4.240: 99.6230% ( 1) 00:19:25.670 4.373 - 4.400: 99.6279% ( 1) 00:19:25.670 4.507 - 4.533: 99.6375% ( 2) 00:19:25.670 4.560 - 4.587: 99.6424% ( 1) 00:19:25.670 4.587 - 4.613: 99.6472% ( 1) 00:19:25.670 4.693 - 4.720: 99.6520% ( 1) 00:19:25.670 4.747 - 4.773: 99.6569% ( 1) 00:19:25.670 4.773 - 4.800: 99.6665% ( 2) 00:19:25.670 4.800 - 4.827: 99.6762% ( 2) 00:19:25.670 4.827 - 4.853: 99.6810% ( 1) 00:19:25.670 4.853 - 4.880: 99.6859% ( 1) 00:19:25.670 4.907 - 4.933: 99.7004% ( 3) 00:19:25.670 4.933 - 4.960: 99.7052% ( 1) 00:19:25.670 4.960 - 4.987: 99.7100% ( 1) 00:19:25.670 4.987 - 5.013: 99.7245% ( 3) 00:19:25.670 5.013 - 5.040: 99.7294% ( 1) 00:19:25.670 5.040 - 5.067: 99.7439% ( 3) 00:19:25.670 5.120 - 5.147: 99.7487% ( 1) 00:19:25.670 5.147 - 5.173: 99.7584% ( 2) 00:19:25.670 5.173 - 5.200: 99.7632% ( 1) 00:19:25.670 5.227 - 5.253: 99.7680% ( 1) 00:19:25.670 5.307 - 5.333: 99.7729% ( 1) 00:19:25.670 5.333 - 5.360: 99.7777% ( 1) 00:19:25.670 5.493 - 5.520: 
99.7874% ( 2) 00:19:25.670 5.627 - 5.653: 99.7922% ( 1) 00:19:25.670 5.707 - 5.733: 99.8019% ( 2) 00:19:25.670 5.760 - 5.787: 99.8164% ( 3) 00:19:25.670 5.813 - 5.840: 99.8212% ( 1) 00:19:25.670 5.840 - 5.867: 99.8260% ( 1) 00:19:25.670 6.000 - 6.027: 99.8309% ( 1) 00:19:25.670 6.053 - 6.080: 99.8357% ( 1) 00:19:25.670 6.133 - 6.160: 99.8405% ( 1) 00:19:25.670 6.400 - 6.427: 99.8454% ( 1) 00:19:25.670 6.427 - 6.453: 99.8502% ( 1) 00:19:25.670 6.480 - 6.507: 99.8550% ( 1) 00:19:25.670 6.560 - 6.587: 99.8647% ( 2) 00:19:25.670 6.613 - 6.640: 99.8695% ( 1) 00:19:25.670 6.667 - 6.693: 99.8743% ( 1) 00:19:25.670 6.827 - 6.880: 99.8792% ( 1) 00:19:25.670 7.253 - 7.307: 99.8840% ( 1) 00:19:25.670 7.467 - 7.520: 99.8888% ( 1) 00:19:25.670 8.267 - 8.320: 99.8937% ( 1) 00:19:25.670 8.533 - 8.587: 99.8985% ( 1) 00:19:25.670 9.707 - 9.760: 99.9033% ( 1) 00:19:25.670 10.987 - 11.040: 99.9082% ( 1) 00:19:25.670 11.307 - 11.360: 99.9130% ( 1) 00:19:25.670 3222.187 - 3235.840: 99.9178% ( 1) 00:19:25.670 3986.773 - 4014.080: 100.0000% ( 17) 00:19:25.670 00:19:25.670 [2024-11-27 07:14:36.571679] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:25.670
Complete histogram 00:19:25.670 ================== 00:19:25.670 Range in us Cumulative Count 00:19:25.670 1.627 - 1.633: 0.0048% ( 1) 00:19:25.670 1.640 - 1.647: 0.6428% ( 132) 00:19:25.670 1.647 - 1.653: 1.1744% ( 110) 00:19:25.670 1.653 - 1.660: 1.2662% ( 19) 00:19:25.670 1.660 - 1.667: 1.3870% ( 25) 00:19:25.670 1.667 - 1.673: 1.4208% ( 7) 00:19:25.670 1.673 - 1.680: 1.4692% ( 10) 00:19:25.670 1.680 - 1.687: 1.4837% ( 3) 00:19:25.670 1.687 - 1.693: 1.5078% ( 5) 00:19:25.670 1.693 - 1.700: 6.5388% ( 1041) 00:19:25.670 1.700 - 1.707: 55.1276% ( 10054) 00:19:25.670 1.707 - 1.720: 70.8680% ( 3257) 00:19:25.670 1.720 - 1.733: 81.3261% ( 2164) 00:19:25.670 1.733 - 1.747: 83.8875% ( 530) 00:19:25.670 1.747 - 1.760: 85.5161% ( 337) 00:19:25.670 1.760 - 1.773: 89.7303% ( 872) 00:19:25.670 1.773 - 1.787: 95.2445% ( 1141) 00:19:25.670 1.787 - 1.800: 98.0186% ( 574) 00:19:25.670 1.800 - 1.813: 99.0624% ( 216) 00:19:25.670 1.813 - 1.827: 99.3427% ( 58) 00:19:25.670 1.827 - 1.840: 99.3766% ( 7) 00:19:25.670 1.840 - 1.853: 99.3911% ( 3) 00:19:25.670 1.853 - 1.867: 99.3959% ( 1) 00:19:25.670 2.067 - 2.080: 99.4007% ( 1) 00:19:25.670 3.373 - 3.387: 99.4056% ( 1) 00:19:25.670 3.387 - 3.400: 99.4104% ( 1) 00:19:25.670 3.413 - 3.440: 99.4152% ( 1) 00:19:25.670 3.493 - 3.520: 99.4201% ( 1) 00:19:25.670 3.520 - 3.547: 99.4297% ( 2) 00:19:25.670 3.547 - 3.573: 99.4346% ( 1) 00:19:25.670 3.573 - 3.600: 99.4394% ( 1) 00:19:25.670 3.653 - 3.680: 99.4442% ( 1) 00:19:25.670 3.787 - 3.813: 99.4491% ( 1) 00:19:25.670 3.813 - 3.840: 99.4539% ( 1) 00:19:25.670 3.840 - 3.867: 99.4587% ( 1) 00:19:25.670 3.920 - 3.947: 99.4684% ( 2) 00:19:25.670 4.347 - 4.373: 99.4732% ( 1) 00:19:25.670 4.453 - 4.480: 99.4781% ( 1) 00:19:25.670 4.480 - 4.507: 99.4829% ( 1) 00:19:25.670 4.613 - 4.640: 99.4877% ( 1) 00:19:25.670 4.667 - 4.693: 99.4974% ( 2) 00:19:25.670 4.693 - 4.720: 99.5022% ( 1) 00:19:25.670 4.747 - 4.773: 99.5071% ( 1) 00:19:25.670 4.773 - 4.800: 99.5119% ( 1) 00:19:25.670 4.827 - 4.853: 99.5167% ( 1) 00:19:25.670 4.853 - 4.880: 99.5216% ( 1) 00:19:25.670 4.880 - 4.907: 99.5264% ( 1) 00:19:25.670 4.933 - 4.960: 99.5312% ( 1) 00:19:25.670 4.960 - 4.987: 99.5361% ( 1) 00:19:25.670 5.067 - 5.093: 99.5409% ( 1) 00:19:25.670 5.227 - 5.253: 99.5506% ( 2) 00:19:25.670 5.307 - 5.333: 99.5554% ( 1) 00:19:25.670 5.440 - 5.467: 99.5602% ( 1)
00:19:25.671 5.520 - 5.547: 99.5699% ( 2) 00:19:25.671 5.787 - 5.813: 99.5747% ( 1) 00:19:25.671 5.893 - 5.920: 99.5795% ( 1) 00:19:25.671 5.920 - 5.947: 99.5844% ( 1) 00:19:25.671 5.947 - 5.973: 99.5892% ( 1) 00:19:25.671 6.373 - 6.400: 99.5940% ( 1) 00:19:25.671 7.093 - 7.147: 99.5989% ( 1) 00:19:25.671 7.147 - 7.200: 99.6037% ( 1) 00:19:25.671 7.627 - 7.680: 99.6085% ( 1) 00:19:25.671 8.427 - 8.480: 99.6134% ( 1) 00:19:25.671 12.373 - 12.427: 99.6182% ( 1) 00:19:25.671 3877.547 - 3904.853: 99.6230% ( 1) 00:19:25.671 3986.773 - 4014.080: 100.0000% ( 78) 00:19:25.671 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:25.671 [ 00:19:25.671 { 00:19:25.671 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:25.671 "subtype": "Discovery", 00:19:25.671 "listen_addresses": [], 00:19:25.671 "allow_any_host": true, 00:19:25.671 "hosts": [] 00:19:25.671 }, 00:19:25.671 { 00:19:25.671 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:25.671 "subtype": "NVMe", 00:19:25.671 "listen_addresses": [ 00:19:25.671 { 00:19:25.671 "trtype": "VFIOUSER", 00:19:25.671 "adrfam": "IPv4", 00:19:25.671 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:25.671 "trsvcid": "0" 00:19:25.671 } 00:19:25.671 ], 00:19:25.671 "allow_any_host": true, 00:19:25.671 "hosts": [], 00:19:25.671 "serial_number": "SPDK1", 00:19:25.671 "model_number": "SPDK bdev Controller", 00:19:25.671 "max_namespaces": 32, 00:19:25.671 "min_cntlid": 1, 00:19:25.671 "max_cntlid": 65519, 00:19:25.671 "namespaces": [ 00:19:25.671 { 00:19:25.671 "nsid": 1, 00:19:25.671 "bdev_name": "Malloc1", 00:19:25.671 "name": "Malloc1", 00:19:25.671 "nguid": "87A003005ABA43F4809FA79F5859C8B1", 00:19:25.671 "uuid": "87a00300-5aba-43f4-809f-a79f5859c8b1" 00:19:25.671 }, 00:19:25.671 { 00:19:25.671 "nsid": 2, 00:19:25.671 "bdev_name": "Malloc3", 00:19:25.671 "name": "Malloc3", 00:19:25.671 "nguid": "ADACA57B0F9F4F71B9F99717984D0F0B", 00:19:25.671 "uuid": "adaca57b-0f9f-4f71-b9f9-9717984d0f0b" 00:19:25.671 } 00:19:25.671 ] 00:19:25.671 }, 00:19:25.671 { 00:19:25.671 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:25.671 "subtype": "NVMe", 00:19:25.671 "listen_addresses": [ 00:19:25.671 { 00:19:25.671 "trtype": "VFIOUSER", 00:19:25.671 "adrfam": "IPv4", 00:19:25.671 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:25.671 "trsvcid": "0" 00:19:25.671 } 00:19:25.671 ], 00:19:25.671 "allow_any_host": true, 00:19:25.671 "hosts": [], 00:19:25.671 "serial_number": "SPDK2", 00:19:25.671 "model_number": "SPDK bdev Controller", 00:19:25.671 "max_namespaces": 32, 00:19:25.671 "min_cntlid": 1, 00:19:25.671 "max_cntlid": 65519, 00:19:25.671 "namespaces": [ 00:19:25.671 { 00:19:25.671 "nsid": 1, 00:19:25.671 "bdev_name": "Malloc2", 00:19:25.671 "name": "Malloc2", 00:19:25.671 "nguid": "EB8FB61D97504721BD3FE1B9428AC58F", 00:19:25.671 "uuid": 
"eb8fb61d-9750-4721-bd3f-e1b9428ac58f" 00:19:25.671 } 00:19:25.671 ] 00:19:25.671 } 00:19:25.671 ] 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2353790 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:25.671 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:25.932 [2024-11-27 07:14:36.951536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:25.932 Malloc4 00:19:25.933 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:25.933 [2024-11-27 07:14:37.131817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:26.193 Asynchronous Event Request test 00:19:26.193 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:26.193 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:26.193 Registering asynchronous event callbacks... 00:19:26.193 Starting namespace attribute notice tests for all controllers... 00:19:26.193 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:26.193 aer_cb - Changed Namespace 00:19:26.193 Cleaning up... 
00:19:26.193 [ 00:19:26.193 { 00:19:26.193 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:26.193 "subtype": "Discovery", 00:19:26.193 "listen_addresses": [], 00:19:26.193 "allow_any_host": true, 00:19:26.193 "hosts": [] 00:19:26.193 }, 00:19:26.193 { 00:19:26.193 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:26.193 "subtype": "NVMe", 00:19:26.193 "listen_addresses": [ 00:19:26.193 { 00:19:26.193 "trtype": "VFIOUSER", 00:19:26.193 "adrfam": "IPv4", 00:19:26.193 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:26.193 "trsvcid": "0" 00:19:26.193 } 00:19:26.193 ], 00:19:26.193 "allow_any_host": true, 00:19:26.193 "hosts": [], 00:19:26.193 "serial_number": "SPDK1", 00:19:26.193 "model_number": "SPDK bdev Controller", 00:19:26.193 "max_namespaces": 32, 00:19:26.193 "min_cntlid": 1, 00:19:26.193 "max_cntlid": 65519, 00:19:26.193 "namespaces": [ 00:19:26.193 { 00:19:26.193 "nsid": 1, 00:19:26.193 "bdev_name": "Malloc1", 00:19:26.193 "name": "Malloc1", 00:19:26.193 "nguid": "87A003005ABA43F4809FA79F5859C8B1", 00:19:26.193 "uuid": "87a00300-5aba-43f4-809f-a79f5859c8b1" 00:19:26.193 }, 00:19:26.193 { 00:19:26.193 "nsid": 2, 00:19:26.193 "bdev_name": "Malloc3", 00:19:26.193 "name": "Malloc3", 00:19:26.193 "nguid": "ADACA57B0F9F4F71B9F99717984D0F0B", 00:19:26.193 "uuid": "adaca57b-0f9f-4f71-b9f9-9717984d0f0b" 00:19:26.193 } 00:19:26.193 ] 00:19:26.193 }, 00:19:26.193 { 00:19:26.193 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:26.193 "subtype": "NVMe", 00:19:26.193 "listen_addresses": [ 00:19:26.193 { 00:19:26.193 "trtype": "VFIOUSER", 00:19:26.193 "adrfam": "IPv4", 00:19:26.193 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:26.193 "trsvcid": "0" 00:19:26.193 } 00:19:26.193 ], 00:19:26.193 "allow_any_host": true, 00:19:26.193 "hosts": [], 00:19:26.193 "serial_number": "SPDK2", 00:19:26.193 "model_number": "SPDK bdev Controller", 00:19:26.193 "max_namespaces": 32, 00:19:26.193 "min_cntlid": 1, 00:19:26.193 "max_cntlid": 65519, 00:19:26.193 "namespaces": [ 00:19:26.193 { 00:19:26.193 "nsid": 1, 00:19:26.193 "bdev_name": "Malloc2", 00:19:26.193 "name": "Malloc2", 00:19:26.193 "nguid": "EB8FB61D97504721BD3FE1B9428AC58F", 00:19:26.193 "uuid": "eb8fb61d-9750-4721-bd3f-e1b9428ac58f" 00:19:26.193 }, 00:19:26.193 { 00:19:26.193 "nsid": 2, 00:19:26.193 "bdev_name": "Malloc4", 00:19:26.193 "name": "Malloc4", 00:19:26.193 "nguid": "1371BC1B30AC4CDFBF43413DD0638660", 00:19:26.193 "uuid": "1371bc1b-30ac-4cdf-bf43-413dd0638660" 00:19:26.193 } 00:19:26.193 ] 00:19:26.193 } 00:19:26.193 ] 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2353790 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2344801 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2344801 ']' 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2344801 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.193 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344801 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344801' 00:19:26.454 killing process with pid 2344801 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2344801 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2344801 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2354099 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2354099' 00:19:26.454 Process pid: 2354099 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2354099 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2354099 ']' 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.454 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:26.454 [2024-11-27 07:14:37.605403] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:26.454 [2024-11-27 07:14:37.606347] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:19:26.454 [2024-11-27 07:14:37.606393] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.714 [2024-11-27 07:14:37.689491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.715 [2024-11-27 07:14:37.720968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.715 [2024-11-27 07:14:37.721001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.715 [2024-11-27 07:14:37.721006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.715 [2024-11-27 07:14:37.721011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.715 [2024-11-27 07:14:37.721016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.715 [2024-11-27 07:14:37.722273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.715 [2024-11-27 07:14:37.722424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.715 [2024-11-27 07:14:37.722572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.715 [2024-11-27 07:14:37.722575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.715 [2024-11-27 07:14:37.774544] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:26.715 [2024-11-27 07:14:37.775582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:26.715 [2024-11-27 07:14:37.776354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:26.715 [2024-11-27 07:14:37.776983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:26.715 [2024-11-27 07:14:37.777000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
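Before the interrupt-mode pass exercises anything, the script rebuilds the same two vfio-user devices; the bring-up below is condensed from the sh@64–sh@74 trace that follows (RPC names and arguments are verbatim from this log; rpc.py paths are shortened and the loop is written out for readability):

rpc.py nvmf_create_transport -t VFIOUSER -M -I    # transport created with '-M -I' this time
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    rpc.py bdev_malloc_create 64 512 -b Malloc$i
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done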
00:19:27.285 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.286 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:27.286 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:28.228 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:28.490 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:28.490 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:28.490 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:28.490 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:28.490 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:28.750 Malloc1 00:19:28.750 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:29.010 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:29.271 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:29.271 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:29.271 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:29.271 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:29.531 Malloc2 00:19:29.531 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:29.792 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:29.792 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2354099 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2354099 ']' 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2354099 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354099 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354099' 00:19:30.052 killing process with pid 2354099 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2354099 00:19:30.052 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2354099 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:30.313 00:19:30.313 real 0m51.038s 00:19:30.313 user 3m15.578s 00:19:30.313 sys 0m2.697s 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:30.313 ************************************ 00:19:30.313 END TEST nvmf_vfio_user 00:19:30.313 ************************************ 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.313 ************************************ 00:19:30.313 START TEST nvmf_vfio_user_nvme_compliance 00:19:30.313 ************************************ 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:30.313 * Looking for test storage... 
00:19:30.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:30.313 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:30.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.575 --rc genhtml_branch_coverage=1 00:19:30.575 --rc genhtml_function_coverage=1 00:19:30.575 --rc genhtml_legend=1 00:19:30.575 --rc geninfo_all_blocks=1 00:19:30.575 --rc geninfo_unexecuted_blocks=1 00:19:30.575 00:19:30.575 ' 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:30.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.575 --rc genhtml_branch_coverage=1 00:19:30.575 --rc genhtml_function_coverage=1 00:19:30.575 --rc genhtml_legend=1 00:19:30.575 --rc geninfo_all_blocks=1 00:19:30.575 --rc geninfo_unexecuted_blocks=1 00:19:30.575 00:19:30.575 ' 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:30.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.575 --rc genhtml_branch_coverage=1 00:19:30.575 --rc genhtml_function_coverage=1 00:19:30.575 --rc genhtml_legend=1 00:19:30.575 --rc geninfo_all_blocks=1 00:19:30.575 --rc geninfo_unexecuted_blocks=1 00:19:30.575 00:19:30.575 ' 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:30.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.575 --rc genhtml_branch_coverage=1 00:19:30.575 --rc genhtml_function_coverage=1 00:19:30.575 --rc genhtml_legend=1 00:19:30.575 --rc geninfo_all_blocks=1 00:19:30.575 --rc 
geninfo_unexecuted_blocks=1 00:19:30.575 00:19:30.575 ' 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.575 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2354886 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2354886' 00:19:30.576 Process pid: 2354886 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2354886 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2354886 ']' 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.576 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:30.576 [2024-11-27 07:14:41.708154] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:19:30.576 [2024-11-27 07:14:41.708237] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.838 [2024-11-27 07:14:41.793766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:30.838 [2024-11-27 07:14:41.828306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.838 [2024-11-27 07:14:41.828336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.838 [2024-11-27 07:14:41.828342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.838 [2024-11-27 07:14:41.828347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.838 [2024-11-27 07:14:41.828351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.838 [2024-11-27 07:14:41.829556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.838 [2024-11-27 07:14:41.829707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.838 [2024-11-27 07:14:41.829710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.415 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.415 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:31.415 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.354 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:32.614 malloc0 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:32.614 07:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.614 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:32.614 00:19:32.614 00:19:32.614 CUnit - A unit testing framework for C - Version 2.1-3 00:19:32.614 http://cunit.sourceforge.net/ 00:19:32.614 00:19:32.614 00:19:32.614 Suite: nvme_compliance 00:19:32.614 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-27 07:14:43.758182] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:32.614 [2024-11-27 07:14:43.759482] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:32.614 [2024-11-27 07:14:43.759493] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:32.614 [2024-11-27 07:14:43.759498] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:32.614 [2024-11-27 07:14:43.761203] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:32.614 passed 00:19:32.873 Test: admin_identify_ctrlr_verify_fused ...[2024-11-27 07:14:43.836707] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:32.873 [2024-11-27 07:14:43.839728] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:32.873 passed 00:19:32.873 Test: admin_identify_ns ...[2024-11-27 07:14:43.918288] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:32.873 [2024-11-27 07:14:43.980166] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:32.873 [2024-11-27 07:14:43.988172] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:32.873 [2024-11-27 07:14:44.009248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:32.873 passed 00:19:33.133 Test: admin_get_features_mandatory_features ...[2024-11-27 07:14:44.082491] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.133 [2024-11-27 07:14:44.085507] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.133 passed 00:19:33.133 Test: admin_get_features_optional_features ...[2024-11-27 07:14:44.162973] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.133 [2024-11-27 07:14:44.165992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.133 passed 00:19:33.133 Test: admin_set_features_number_of_queues ...[2024-11-27 07:14:44.240512] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.393 [2024-11-27 07:14:44.345245] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.393 passed 00:19:33.393 Test: admin_get_log_page_mandatory_logs ...[2024-11-27 07:14:44.421310] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.393 [2024-11-27 07:14:44.424334] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.393 passed 00:19:33.393 Test: admin_get_log_page_with_lpo ...[2024-11-27 07:14:44.501549] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.393 [2024-11-27 07:14:44.570166] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:33.393 [2024-11-27 07:14:44.583209] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.653 passed 00:19:33.653 Test: fabric_property_get ...[2024-11-27 07:14:44.657450] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.653 [2024-11-27 07:14:44.658659] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:33.653 [2024-11-27 07:14:44.660475] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.653 passed 00:19:33.653 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-27 07:14:44.734949] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.653 [2024-11-27 07:14:44.736148] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:33.653 [2024-11-27 07:14:44.737974] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.653 passed 00:19:33.653 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-27 07:14:44.814721] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.913 [2024-11-27 07:14:44.899165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:33.913 [2024-11-27 07:14:44.915164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:33.913 [2024-11-27 07:14:44.920244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.913 passed 00:19:33.913 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-27 07:14:44.993478] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.913 [2024-11-27 07:14:44.994675] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:33.913 [2024-11-27 07:14:44.996494] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.913 passed 00:19:33.913 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-27 07:14:45.071512] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:34.173 [2024-11-27 07:14:45.151171] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:34.173 [2024-11-27 07:14:45.175165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:34.173 [2024-11-27 07:14:45.180243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:34.173 passed 00:19:34.173 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-27 07:14:45.254467] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:34.173 [2024-11-27 07:14:45.255672] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:34.173 [2024-11-27 07:14:45.255689] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:34.173 [2024-11-27 07:14:45.257485] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:34.173 passed 00:19:34.173 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-27 07:14:45.332196] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:34.433 [2024-11-27 07:14:45.424165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:34.433 [2024-11-27 07:14:45.432166] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:34.433 [2024-11-27 07:14:45.440165] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:34.433 [2024-11-27 07:14:45.448167] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:34.433 [2024-11-27 07:14:45.477227] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:34.433 passed 00:19:34.433 Test: admin_create_io_sq_verify_pc ...[2024-11-27 07:14:45.550408] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:34.434 [2024-11-27 07:14:45.567169] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:34.434 [2024-11-27 07:14:45.584557] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:34.434 passed 00:19:34.694 Test: admin_create_io_qp_max_qps ...[2024-11-27 07:14:45.660017] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:35.638 [2024-11-27 07:14:46.775167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:36.209 [2024-11-27 07:14:47.171133] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:36.209 passed 00:19:36.209 Test: admin_create_io_sq_shared_cq ...[2024-11-27 07:14:47.244924] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:36.209 [2024-11-27 07:14:47.377165] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:36.470 [2024-11-27 07:14:47.414217] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:36.470 passed
00:19:36.470
00:19:36.470 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:36.470               suites      1      1    n/a      0        0
00:19:36.470                tests     18     18     18      0        0
00:19:36.470              asserts    360    360    360      0      n/a
00:19:36.470
00:19:36.470 Elapsed time = 1.502 seconds
00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2354886 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2354886 ']' 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2354886 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354886 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354886' 00:19:36.470 killing process with pid 2354886 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2354886 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2354886 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:36.470 00:19:36.470 real 0m6.228s 00:19:36.470 user 0m17.670s 00:19:36.470 sys 0m0.520s 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.470 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:36.470 ************************************ 00:19:36.470 END TEST nvmf_vfio_user_nvme_compliance 00:19:36.470 ************************************ 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:36.733 ************************************ 00:19:36.733 START TEST nvmf_vfio_user_fuzz 00:19:36.733 ************************************ 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:36.733 * Looking for test storage... 
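Before each of these tests, autotest_common probes the installed lcov and runs the version comparison traced earlier (and repeated below for the fuzz test): split both version strings on `.` and `-`, then compare numerically field by field. A rough standalone sketch of that logic, assuming purely numeric fields (the actual scripts/common.sh additionally validates each field with a regex via its `decimal` helper):

    version_lt() {                       # returns 0 (true) when $1 sorts before $2
        local IFS=.-
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "old lcov"   # true here, as in the trace

Because `lt 1.15 2` succeeds, the LCOV_OPTS/LCOV exports in these traces carry the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options.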
00:19:36.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:36.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.733 --rc genhtml_branch_coverage=1 00:19:36.733 --rc genhtml_function_coverage=1 00:19:36.733 --rc genhtml_legend=1 00:19:36.733 --rc geninfo_all_blocks=1 00:19:36.733 --rc geninfo_unexecuted_blocks=1 00:19:36.733 00:19:36.733 ' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:36.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.733 --rc genhtml_branch_coverage=1 00:19:36.733 --rc genhtml_function_coverage=1 00:19:36.733 --rc genhtml_legend=1 00:19:36.733 --rc geninfo_all_blocks=1 00:19:36.733 --rc geninfo_unexecuted_blocks=1 00:19:36.733 00:19:36.733 ' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:36.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.733 --rc genhtml_branch_coverage=1 00:19:36.733 --rc genhtml_function_coverage=1 00:19:36.733 --rc genhtml_legend=1 00:19:36.733 --rc geninfo_all_blocks=1 00:19:36.733 --rc geninfo_unexecuted_blocks=1 00:19:36.733 00:19:36.733 ' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:36.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.733 --rc genhtml_branch_coverage=1 00:19:36.733 --rc genhtml_function_coverage=1 00:19:36.733 --rc genhtml_legend=1 00:19:36.733 --rc geninfo_all_blocks=1 00:19:36.733 --rc geninfo_unexecuted_blocks=1 00:19:36.733 00:19:36.733 ' 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.733 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.734 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:36.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2356249 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2356249' 00:19:36.996 Process pid: 2356249 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2356249 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2356249 ']' 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
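The launch sequence traced here is the same harness pattern the compliance test used: start nvmf_tgt in the background, remember its pid, arm a trap so the target is killed even if the test aborts, then block until the RPC socket is listening. Condensed into a sketch — the `waitforlisten` and `killprocess` steps below are simplified stand-ins for the autotest_common.sh helpers, not their real bodies:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude waitforlisten
    # ... run the test against the target ...
    trap - SIGINT SIGTERM EXIT
    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null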
00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.996 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:37.939 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.939 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:37.939 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.885 malloc0 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
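The rpc_cmd sequence just traced assembles the same vfio-user target the compliance run built; expressed as explicit rpc.py calls against the default /var/tmp/spdk.sock it is roughly the following (the compliance variant additionally passed `-m 32` to nvmf_create_subsystem):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b malloc0            # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

With the listener in place, nvme_fuzz below attaches through the trid string `trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user`.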
00:19:38.885 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:11.021 Fuzzing completed. Shutting down the fuzz application 00:20:11.021 00:20:11.021 Dumping successful admin opcodes: 00:20:11.021 9, 10, 00:20:11.021 Dumping successful io opcodes: 00:20:11.021 0, 00:20:11.021 NS: 0x20000081ef00 I/O qp, Total commands completed: 1411333, total successful commands: 5548, random_seed: 2253373824 00:20:11.021 NS: 0x20000081ef00 admin qp, Total commands completed: 349984, total successful commands: 94, random_seed: 2936588864 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2356249 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2356249 ']' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2356249 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356249 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356249' 00:20:11.021 killing process with pid 2356249 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2356249 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2356249 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:11.021 00:20:11.021 real 0m32.834s 00:20:11.021 user 0m37.814s 00:20:11.021 sys 0m24.251s 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:11.021 ************************************ 
00:20:11.021 END TEST nvmf_vfio_user_fuzz 00:20:11.021 ************************************ 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:11.021 ************************************ 00:20:11.021 START TEST nvmf_auth_target 00:20:11.021 ************************************ 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:11.021 * Looking for test storage... 00:20:11.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:11.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.021 --rc genhtml_branch_coverage=1 00:20:11.021 --rc genhtml_function_coverage=1 00:20:11.021 --rc genhtml_legend=1 00:20:11.021 --rc geninfo_all_blocks=1 00:20:11.021 --rc geninfo_unexecuted_blocks=1 00:20:11.021 00:20:11.021 ' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:11.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.021 --rc genhtml_branch_coverage=1 00:20:11.021 --rc genhtml_function_coverage=1 00:20:11.021 --rc genhtml_legend=1 00:20:11.021 --rc geninfo_all_blocks=1 00:20:11.021 --rc geninfo_unexecuted_blocks=1 00:20:11.021 00:20:11.021 ' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:11.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.021 --rc genhtml_branch_coverage=1 00:20:11.021 --rc genhtml_function_coverage=1 00:20:11.021 --rc genhtml_legend=1 00:20:11.021 --rc geninfo_all_blocks=1 00:20:11.021 --rc geninfo_unexecuted_blocks=1 00:20:11.021 00:20:11.021 ' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:11.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.021 --rc genhtml_branch_coverage=1 00:20:11.021 --rc genhtml_function_coverage=1 00:20:11.021 --rc genhtml_legend=1 00:20:11.021 --rc geninfo_all_blocks=1 00:20:11.021 --rc geninfo_unexecuted_blocks=1 00:20:11.021 00:20:11.021 ' 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.021 07:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.021 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:11.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.022 07:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:17.609 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:17.610 
07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:17.610 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.610 07:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:17.610 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:17.610 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:17.610 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.610 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:17.611 07:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:17.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:20:17.611 00:20:17.611 --- 10.0.0.2 ping statistics --- 00:20:17.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.611 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:20:17.611 00:20:17.611 --- 10.0.0.1 ping statistics --- 00:20:17.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.611 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2366831 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2366831 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2366831 ']' 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
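
The nvmf_tcp_init trace above builds the two-endpoint topology that every authentication attempt below runs over: one E810 port is moved into a private network namespace as the target, its sibling stays in the default namespace as the initiator, and both directions are ping-verified before nvmf_tgt starts. A condensed sketch of the same plumbing, using the interface names and 10.0.0.x addresses that belong to this particular run (they are detected, not fixed constants):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
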
00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.611 07:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2366933 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ace36302585418d47219623653ca937b579135a7cf467447 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.l2p 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ace36302585418d47219623653ca937b579135a7cf467447 0 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ace36302585418d47219623653ca937b579135a7cf467447 0 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ace36302585418d47219623653ca937b579135a7cf467447 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:18.184 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
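
gen_dhchap_key, traced above, draws len/2 random bytes with xxd and hands the hex string to an inline python snippet that the log shows only as "python -". The snippet's body is not captured here; the sketch below reconstructs it under the assumption that it follows the NVMe DH-HMAC-CHAP secret convention of base64-encoding the key bytes followed by their little-endian CRC-32, with the digest field taken from the map traced earlier (null=0, sha256=1, sha384=2, sha512=3) — treat the encoding details as a reconstruction, not the verbatim script:

    # Reconstruction of gen_dhchap_key; the real python body is not in the log,
    # so the DHHC-1:<digest>:base64(key || crc32(key)): layout is an assumption.
    gen_dhchap_key() {
        local digest_id=$1 len=$2 hex
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 raw bytes
        python3 -c 'import base64,binascii,struct,sys; raw=binascii.unhexlify(sys.argv[1]); crc=struct.pack("<I",binascii.crc32(raw)&0xFFFFFFFF); print("DHHC-1:%02d:%s:"%(int(sys.argv[2]),base64.b64encode(raw+crc).decode()))' "$hex" "$digest_id"
    }
    key=$(gen_dhchap_key 0 48)               # matches "gen_dhchap_key null 48" above
    file=$(mktemp -t spdk.key-null.XXX)
    echo "$key" > "$file" && chmod 0600 "$file"
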
00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.l2p 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.l2p 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.l2p 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b80f7594ebcfb98eb2290eb159973a983d43aa58dcc6e8028d209b6e425f4be9 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dM0 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b80f7594ebcfb98eb2290eb159973a983d43aa58dcc6e8028d209b6e425f4be9 3 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b80f7594ebcfb98eb2290eb159973a983d43aa58dcc6e8028d209b6e425f4be9 3 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b80f7594ebcfb98eb2290eb159973a983d43aa58dcc6e8028d209b6e425f4be9 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dM0 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dM0 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dM0 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
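
The remaining gen_dhchap_key calls repeat this recipe with other digest/length pairs. For orientation, the key material this run ends up with: host secret keys[i] paired with a controller (bidirectional) secret ckeys[i] of a different digest, and ckeys[3] left empty so the last round exercises unidirectional authentication:

    keys[0]=/tmp/spdk.key-null.l2p;    ckeys[0]=/tmp/spdk.key-sha512.dM0   # null   / sha512
    keys[1]=/tmp/spdk.key-sha256.9tL;  ckeys[1]=/tmp/spdk.key-sha384.O2P   # sha256 / sha384
    keys[2]=/tmp/spdk.key-sha384.xh2;  ckeys[2]=/tmp/spdk.key-sha256.Qfy   # sha384 / sha256
    keys[3]=/tmp/spdk.key-sha512.hj6;  ckeys[3]=                           # sha512 / none
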
00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f545b9636b80cf7ff0a8bd9c5660617e 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9tL 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f545b9636b80cf7ff0a8bd9c5660617e 1 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f545b9636b80cf7ff0a8bd9c5660617e 1 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f545b9636b80cf7ff0a8bd9c5660617e 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9tL 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9tL 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.9tL 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bc9f9e77ec069573ec18ec76115e28aafe5ee896c5b866cd 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.O2P 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bc9f9e77ec069573ec18ec76115e28aafe5ee896c5b866cd 2 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bc9f9e77ec069573ec18ec76115e28aafe5ee896c5b866cd 2 00:20:18.445 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.445 07:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.446 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bc9f9e77ec069573ec18ec76115e28aafe5ee896c5b866cd 00:20:18.446 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:18.446 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:18.446 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.O2P 00:20:18.446 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.O2P 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.O2P 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5a0e5c99772aa7096c1dbf0aa6cd9eca7ce3d6f72a418385 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xh2 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5a0e5c99772aa7096c1dbf0aa6cd9eca7ce3d6f72a418385 2 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5a0e5c99772aa7096c1dbf0aa6cd9eca7ce3d6f72a418385 2 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5a0e5c99772aa7096c1dbf0aa6cd9eca7ce3d6f72a418385 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xh2 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xh2 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.xh2 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f9625348080e19138ea5af3144a615c7 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Qfy 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f9625348080e19138ea5af3144a615c7 1 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f9625348080e19138ea5af3144a615c7 1 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f9625348080e19138ea5af3144a615c7 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Qfy 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Qfy 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Qfy 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4452287de0051be2ee0acb24007fad6ee46ad8e8ac295699051905b1dbbbcfef 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hj6 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 4452287de0051be2ee0acb24007fad6ee46ad8e8ac295699051905b1dbbbcfef 3 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4452287de0051be2ee0acb24007fad6ee46ad8e8ac295699051905b1dbbbcfef 3 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4452287de0051be2ee0acb24007fad6ee46ad8e8ac295699051905b1dbbbcfef 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hj6 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hj6 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.hj6 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2366831 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2366831 ']' 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.706 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2366933 /var/tmp/host.sock 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2366933 ']' 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:18.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
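
Once both daemons answer on their sockets, the registration loop (target/auth.sh@108-113, traced below) adds each key file twice: to the nvmf target over its default RPC socket and to the host-side bdev layer over /var/tmp/host.sock via the hostrpc wrapper. Spelled out for key0 — the assumption that rpc_cmd lands on /var/tmp/spdk.sock is taken from the rpc_addr value visible in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side (nvmf_tgt, default /var/tmp/spdk.sock)
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.l2p
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dM0
    # host side (spdk_tgt acting as initiator, started with -r /var/tmp/host.sock)
    $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.l2p
    $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dM0
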
00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.967 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.l2p 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.l2p 00:20:19.227 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.l2p 00:20:19.487 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.dM0 ]] 00:20:19.487 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dM0 00:20:19.487 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.487 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.487 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.487 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dM0 00:20:19.487 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dM0 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9tL 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.747 07:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.9tL 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.9tL 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.O2P ]] 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O2P 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O2P 00:20:19.747 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O2P 00:20:20.006 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:20.006 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xh2 00:20:20.006 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.006 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.006 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.006 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xh2 00:20:20.006 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xh2 00:20:20.265 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Qfy ]] 00:20:20.265 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qfy 00:20:20.265 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.265 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.265 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qfy 00:20:20.265 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qfy 00:20:20.526 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:20.526 07:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hj6 00:20:20.526 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.526 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.526 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.526 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.hj6 00:20:20.526 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.hj6 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.785 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.785 
07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.046 00:20:21.046 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.046 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.046 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.306 { 00:20:21.306 "cntlid": 1, 00:20:21.306 "qid": 0, 00:20:21.306 "state": "enabled", 00:20:21.306 "thread": "nvmf_tgt_poll_group_000", 00:20:21.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:21.306 "listen_address": { 00:20:21.306 "trtype": "TCP", 00:20:21.306 "adrfam": "IPv4", 00:20:21.306 "traddr": "10.0.0.2", 00:20:21.306 "trsvcid": "4420" 00:20:21.306 }, 00:20:21.306 "peer_address": { 00:20:21.306 "trtype": "TCP", 00:20:21.306 "adrfam": "IPv4", 00:20:21.306 "traddr": "10.0.0.1", 00:20:21.306 "trsvcid": "34914" 00:20:21.306 }, 00:20:21.306 "auth": { 00:20:21.306 "state": "completed", 00:20:21.306 "digest": "sha256", 00:20:21.306 "dhgroup": "null" 00:20:21.306 } 00:20:21.306 } 00:20:21.306 ]' 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.306 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.566 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.566 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.566 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.566 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.566 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.828 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:21.828 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.398 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.659 07:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.659 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.659 00:20:22.919 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.919 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.919 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.919 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.919 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.919 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.919 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.919 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.919 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.919 { 00:20:22.919 "cntlid": 3, 00:20:22.919 "qid": 0, 00:20:22.919 "state": "enabled", 00:20:22.919 "thread": "nvmf_tgt_poll_group_000", 00:20:22.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:22.919 "listen_address": { 00:20:22.919 "trtype": "TCP", 00:20:22.919 "adrfam": "IPv4", 00:20:22.919 "traddr": "10.0.0.2", 00:20:22.919 "trsvcid": "4420" 00:20:22.919 }, 00:20:22.919 "peer_address": { 00:20:22.919 "trtype": "TCP", 00:20:22.919 "adrfam": "IPv4", 00:20:22.919 "traddr": "10.0.0.1", 00:20:22.919 "trsvcid": "34944" 00:20:22.919 }, 00:20:22.919 "auth": { 00:20:22.919 "state": "completed", 00:20:22.919 "digest": "sha256", 00:20:22.919 "dhgroup": "null" 00:20:22.919 } 00:20:22.919 } 00:20:22.919 ]' 00:20:22.919 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.180 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.180 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.180 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.180 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.180 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.180 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.180 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.441 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:23.441 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.011 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.271 07:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.271 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.532 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.532 { 00:20:24.532 "cntlid": 5, 00:20:24.532 "qid": 0, 00:20:24.532 "state": "enabled", 00:20:24.532 "thread": "nvmf_tgt_poll_group_000", 00:20:24.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:24.532 "listen_address": { 00:20:24.532 "trtype": "TCP", 00:20:24.532 "adrfam": "IPv4", 00:20:24.532 "traddr": "10.0.0.2", 00:20:24.532 "trsvcid": "4420" 00:20:24.532 }, 00:20:24.532 "peer_address": { 00:20:24.532 "trtype": "TCP", 00:20:24.532 "adrfam": "IPv4", 00:20:24.532 "traddr": "10.0.0.1", 00:20:24.532 "trsvcid": "34704" 00:20:24.532 }, 00:20:24.532 "auth": { 00:20:24.532 "state": "completed", 00:20:24.532 "digest": "sha256", 00:20:24.532 "dhgroup": "null" 00:20:24.532 } 00:20:24.532 } 00:20:24.532 ]' 00:20:24.532 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.793 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.793 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.793 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.793 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.793 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.793 07:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.793 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.052 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:25.052 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:25.623 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:25.882 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:25.882 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.882 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.882 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.882 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.882 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.883 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:25.883 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.883 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.883 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.883 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.883 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.883 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.143 00:20:26.143 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.143 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.143 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.404 { 00:20:26.404 "cntlid": 7, 00:20:26.404 "qid": 0, 00:20:26.404 "state": "enabled", 00:20:26.404 "thread": "nvmf_tgt_poll_group_000", 00:20:26.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:26.404 "listen_address": { 00:20:26.404 "trtype": "TCP", 00:20:26.404 "adrfam": "IPv4", 00:20:26.404 "traddr": "10.0.0.2", 00:20:26.404 "trsvcid": "4420" 00:20:26.404 }, 00:20:26.404 "peer_address": { 00:20:26.404 "trtype": "TCP", 00:20:26.404 "adrfam": "IPv4", 00:20:26.404 "traddr": "10.0.0.1", 00:20:26.404 "trsvcid": "34734" 00:20:26.404 }, 00:20:26.404 "auth": { 00:20:26.404 "state": "completed", 00:20:26.404 "digest": "sha256", 00:20:26.404 "dhgroup": "null" 00:20:26.404 } 00:20:26.404 } 00:20:26.404 ]' 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.404 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.665 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:26.665 07:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.235 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.495 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.755 00:20:27.755 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.755 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.755 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.755 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.755 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.755 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.755 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.014 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.014 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.014 { 00:20:28.014 "cntlid": 9, 00:20:28.014 "qid": 0, 00:20:28.014 "state": "enabled", 00:20:28.014 "thread": "nvmf_tgt_poll_group_000", 00:20:28.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:28.014 "listen_address": { 00:20:28.014 "trtype": "TCP", 00:20:28.014 "adrfam": "IPv4", 00:20:28.014 "traddr": "10.0.0.2", 00:20:28.014 "trsvcid": "4420" 00:20:28.014 }, 00:20:28.014 "peer_address": { 00:20:28.014 "trtype": "TCP", 00:20:28.014 "adrfam": "IPv4", 00:20:28.014 "traddr": "10.0.0.1", 00:20:28.014 "trsvcid": "34752" 00:20:28.014 }, 00:20:28.014 "auth": { 00:20:28.014 "state": "completed", 00:20:28.014 "digest": "sha256", 00:20:28.014 "dhgroup": "ffdhe2048" 00:20:28.014 } 00:20:28.014 } 00:20:28.014 ]' 00:20:28.014 07:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.014 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.014 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.014 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:28.014 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.014 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.014 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.014 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.274 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:28.274 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:28.844 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.844 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.844 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.844 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.844 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.844 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.844 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:28.845 07:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.105 07:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.105 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.366 00:20:29.366 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.366 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.366 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.628 { 00:20:29.628 "cntlid": 11, 00:20:29.628 "qid": 0, 00:20:29.628 "state": "enabled", 00:20:29.628 "thread": "nvmf_tgt_poll_group_000", 00:20:29.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:29.628 "listen_address": { 00:20:29.628 "trtype": "TCP", 00:20:29.628 "adrfam": "IPv4", 00:20:29.628 "traddr": "10.0.0.2", 00:20:29.628 "trsvcid": "4420" 00:20:29.628 }, 00:20:29.628 "peer_address": { 00:20:29.628 "trtype": "TCP", 00:20:29.628 "adrfam": "IPv4", 00:20:29.628 "traddr": "10.0.0.1", 00:20:29.628 "trsvcid": "34776" 00:20:29.628 }, 00:20:29.628 "auth": { 00:20:29.628 "state": "completed", 00:20:29.628 "digest": "sha256", 00:20:29.628 "dhgroup": "ffdhe2048" 00:20:29.628 } 00:20:29.628 } 00:20:29.628 ]' 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.628 07:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.628 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.889 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:29.889 07:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.461 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:30.721 07:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.721 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.037 00:20:31.037 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.037 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.037 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.037 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.037 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.037 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.037 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.366 { 00:20:31.366 "cntlid": 13, 00:20:31.366 "qid": 0, 00:20:31.366 "state": "enabled", 00:20:31.366 "thread": "nvmf_tgt_poll_group_000", 00:20:31.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:31.366 "listen_address": { 00:20:31.366 "trtype": "TCP", 00:20:31.366 "adrfam": "IPv4", 00:20:31.366 "traddr": "10.0.0.2", 00:20:31.366 "trsvcid": "4420" 00:20:31.366 }, 00:20:31.366 "peer_address": { 00:20:31.366 "trtype": "TCP", 00:20:31.366 "adrfam": "IPv4", 00:20:31.366 "traddr": "10.0.0.1", 00:20:31.366 "trsvcid": "34816" 00:20:31.366 }, 00:20:31.366 "auth": { 00:20:31.366 "state": "completed", 00:20:31.366 "digest": 
"sha256", 00:20:31.366 "dhgroup": "ffdhe2048" 00:20:31.366 } 00:20:31.366 } 00:20:31.366 ]' 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.366 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.634 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:31.634 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.206 07:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.206 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.467 00:20:32.467 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.467 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.467 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.728 { 00:20:32.728 "cntlid": 15, 00:20:32.728 "qid": 0, 00:20:32.728 "state": "enabled", 00:20:32.728 "thread": "nvmf_tgt_poll_group_000", 00:20:32.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.728 "listen_address": { 00:20:32.728 "trtype": "TCP", 00:20:32.728 "adrfam": "IPv4", 00:20:32.728 "traddr": "10.0.0.2", 00:20:32.728 "trsvcid": "4420" 00:20:32.728 }, 00:20:32.728 "peer_address": { 00:20:32.728 "trtype": "TCP", 00:20:32.728 "adrfam": "IPv4", 00:20:32.728 "traddr": "10.0.0.1", 00:20:32.728 
"trsvcid": "34840" 00:20:32.728 }, 00:20:32.728 "auth": { 00:20:32.728 "state": "completed", 00:20:32.728 "digest": "sha256", 00:20:32.728 "dhgroup": "ffdhe2048" 00:20:32.728 } 00:20:32.728 } 00:20:32.728 ]' 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.728 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.988 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.988 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.988 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.988 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:32.988 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:33.928 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:33.929 07:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.929 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.189 00:20:34.189 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.189 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.189 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.450 { 00:20:34.450 "cntlid": 17, 00:20:34.450 "qid": 0, 00:20:34.450 "state": "enabled", 00:20:34.450 "thread": "nvmf_tgt_poll_group_000", 00:20:34.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:34.450 "listen_address": { 00:20:34.450 "trtype": "TCP", 00:20:34.450 "adrfam": "IPv4", 
00:20:34.450 "traddr": "10.0.0.2", 00:20:34.450 "trsvcid": "4420" 00:20:34.450 }, 00:20:34.450 "peer_address": { 00:20:34.450 "trtype": "TCP", 00:20:34.450 "adrfam": "IPv4", 00:20:34.450 "traddr": "10.0.0.1", 00:20:34.450 "trsvcid": "43362" 00:20:34.450 }, 00:20:34.450 "auth": { 00:20:34.450 "state": "completed", 00:20:34.450 "digest": "sha256", 00:20:34.450 "dhgroup": "ffdhe3072" 00:20:34.450 } 00:20:34.450 } 00:20:34.450 ]' 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.450 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.710 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:34.710 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.281 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.541 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.802 00:20:35.802 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.802 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.802 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.062 { 
00:20:36.062 "cntlid": 19, 00:20:36.062 "qid": 0, 00:20:36.062 "state": "enabled", 00:20:36.062 "thread": "nvmf_tgt_poll_group_000", 00:20:36.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.062 "listen_address": { 00:20:36.062 "trtype": "TCP", 00:20:36.062 "adrfam": "IPv4", 00:20:36.062 "traddr": "10.0.0.2", 00:20:36.062 "trsvcid": "4420" 00:20:36.062 }, 00:20:36.062 "peer_address": { 00:20:36.062 "trtype": "TCP", 00:20:36.062 "adrfam": "IPv4", 00:20:36.062 "traddr": "10.0.0.1", 00:20:36.062 "trsvcid": "43396" 00:20:36.062 }, 00:20:36.062 "auth": { 00:20:36.062 "state": "completed", 00:20:36.062 "digest": "sha256", 00:20:36.062 "dhgroup": "ffdhe3072" 00:20:36.062 } 00:20:36.062 } 00:20:36.062 ]' 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.062 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.322 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:36.322 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.893 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.153 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.414 00:20:37.414 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.414 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.414 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.674 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.674 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.674 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.674 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.674 07:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.674 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.674 { 00:20:37.674 "cntlid": 21, 00:20:37.674 "qid": 0, 00:20:37.674 "state": "enabled", 00:20:37.674 "thread": "nvmf_tgt_poll_group_000", 00:20:37.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.674 "listen_address": { 00:20:37.674 "trtype": "TCP", 00:20:37.674 "adrfam": "IPv4", 00:20:37.674 "traddr": "10.0.0.2", 00:20:37.674 "trsvcid": "4420" 00:20:37.674 }, 00:20:37.674 "peer_address": { 00:20:37.674 "trtype": "TCP", 00:20:37.674 "adrfam": "IPv4", 00:20:37.674 "traddr": "10.0.0.1", 00:20:37.674 "trsvcid": "43420" 00:20:37.674 }, 00:20:37.674 "auth": { 00:20:37.674 "state": "completed", 00:20:37.674 "digest": "sha256", 00:20:37.674 "dhgroup": "ffdhe3072" 00:20:37.674 } 00:20:37.674 } 00:20:37.674 ]' 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.675 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.935 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:37.935 07:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.505 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.766 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.028 00:20:39.028 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.028 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.028 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.289 07:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.289 { 00:20:39.289 "cntlid": 23, 00:20:39.289 "qid": 0, 00:20:39.289 "state": "enabled", 00:20:39.289 "thread": "nvmf_tgt_poll_group_000", 00:20:39.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:39.289 "listen_address": { 00:20:39.289 "trtype": "TCP", 00:20:39.289 "adrfam": "IPv4", 00:20:39.289 "traddr": "10.0.0.2", 00:20:39.289 "trsvcid": "4420" 00:20:39.289 }, 00:20:39.289 "peer_address": { 00:20:39.289 "trtype": "TCP", 00:20:39.289 "adrfam": "IPv4", 00:20:39.289 "traddr": "10.0.0.1", 00:20:39.289 "trsvcid": "43444" 00:20:39.289 }, 00:20:39.289 "auth": { 00:20:39.289 "state": "completed", 00:20:39.289 "digest": "sha256", 00:20:39.289 "dhgroup": "ffdhe3072" 00:20:39.289 } 00:20:39.289 } 00:20:39.289 ]' 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.289 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.549 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:39.549 07:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:40.119 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:40.120 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.380 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.642 00:20:40.642 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.642 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.642 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.904 { 00:20:40.904 "cntlid": 25, 00:20:40.904 "qid": 0, 00:20:40.904 "state": "enabled", 00:20:40.904 "thread": "nvmf_tgt_poll_group_000", 00:20:40.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:40.904 "listen_address": { 00:20:40.904 "trtype": "TCP", 00:20:40.904 "adrfam": "IPv4", 00:20:40.904 "traddr": "10.0.0.2", 00:20:40.904 "trsvcid": "4420" 00:20:40.904 }, 00:20:40.904 "peer_address": { 00:20:40.904 "trtype": "TCP", 00:20:40.904 "adrfam": "IPv4", 00:20:40.904 "traddr": "10.0.0.1", 00:20:40.904 "trsvcid": "43466" 00:20:40.904 }, 00:20:40.904 "auth": { 00:20:40.904 "state": "completed", 00:20:40.904 "digest": "sha256", 00:20:40.904 "dhgroup": "ffdhe4096" 00:20:40.904 } 00:20:40.904 } 00:20:40.904 ]' 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.904 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.904 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.904 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.904 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.904 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.164 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:41.164 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:41.734 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.734 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.734 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.734 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.735 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.735 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.735 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:41.735 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.995 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.255 00:20:42.255 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.255 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.255 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.515 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.515 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.515 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.515 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.515 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.515 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.515 { 00:20:42.515 "cntlid": 27, 00:20:42.515 "qid": 0, 00:20:42.515 "state": "enabled", 00:20:42.515 "thread": "nvmf_tgt_poll_group_000", 00:20:42.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:42.515 "listen_address": { 00:20:42.515 "trtype": "TCP", 00:20:42.515 "adrfam": "IPv4", 00:20:42.515 "traddr": "10.0.0.2", 00:20:42.515 "trsvcid": "4420" 00:20:42.515 }, 00:20:42.515 "peer_address": { 00:20:42.515 "trtype": "TCP", 00:20:42.515 "adrfam": "IPv4", 00:20:42.515 "traddr": "10.0.0.1", 00:20:42.515 "trsvcid": "43486" 00:20:42.515 }, 00:20:42.515 "auth": { 00:20:42.515 "state": "completed", 00:20:42.515 "digest": "sha256", 00:20:42.516 "dhgroup": "ffdhe4096" 00:20:42.516 } 00:20:42.516 } 00:20:42.516 ]' 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.516 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.777 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:42.777 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:43.346 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:43.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.346 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.346 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.346 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.347 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.347 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.347 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.347 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.606 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.866 00:20:43.866 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
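The trace above repeats the same cycle for every digest/dhgroup/key-id combination. Condensed, one connect_authenticate iteration reduces to roughly the RPC sequence below — a sketch, not a verbatim excerpt: it assumes the sockets, NQNs and addresses used in this run, KEY_ID and the DHHC-1 placeholders stand in for the per-iteration secrets, and (as the log shows for key3) iterations without a configured controller key simply omit the --dhchap-ctrlr-key / --dhchap-ctrl-secret arguments.

# Condensed sketch of one connect_authenticate cycle (digest sha256, one dhgroup, one key id).
# Assumes the target and host RPC servers from this run are up; KEY_ID and the
# "DHHC-1:..:<...>" strings below are placeholders, not values from the log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# 1. Restrict the SPDK host stack to the digest/dhgroup pair under test.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# 2. Register the host on the target with the DH-HMAC-CHAP key(s) for this iteration.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
    --dhchap-key key$KEY_ID --dhchap-ctrlr-key ckey$KEY_ID

# 3. Attach through the SPDK host stack and verify the qpair authenticated.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key$KEY_ID --dhchap-ctrlr-key ckey$KEY_ID
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'      # expect "completed"
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake with the kernel initiator, then tear the host back down.
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret "DHHC-1:00:<key>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl-key>"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN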
00:20:43.866 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.866 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.125 { 00:20:44.125 "cntlid": 29, 00:20:44.125 "qid": 0, 00:20:44.125 "state": "enabled", 00:20:44.125 "thread": "nvmf_tgt_poll_group_000", 00:20:44.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:44.125 "listen_address": { 00:20:44.125 "trtype": "TCP", 00:20:44.125 "adrfam": "IPv4", 00:20:44.125 "traddr": "10.0.0.2", 00:20:44.125 "trsvcid": "4420" 00:20:44.125 }, 00:20:44.125 "peer_address": { 00:20:44.125 "trtype": "TCP", 00:20:44.125 "adrfam": "IPv4", 00:20:44.125 "traddr": "10.0.0.1", 00:20:44.125 "trsvcid": "47734" 00:20:44.125 }, 00:20:44.125 "auth": { 00:20:44.125 "state": "completed", 00:20:44.125 "digest": "sha256", 00:20:44.125 "dhgroup": "ffdhe4096" 00:20:44.125 } 00:20:44.125 } 00:20:44.125 ]' 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.125 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.385 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:44.385 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: 
--dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.955 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.215 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.475 00:20:45.475 07:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.475 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.475 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.735 { 00:20:45.735 "cntlid": 31, 00:20:45.735 "qid": 0, 00:20:45.735 "state": "enabled", 00:20:45.735 "thread": "nvmf_tgt_poll_group_000", 00:20:45.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:45.735 "listen_address": { 00:20:45.735 "trtype": "TCP", 00:20:45.735 "adrfam": "IPv4", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "trsvcid": "4420" 00:20:45.735 }, 00:20:45.735 "peer_address": { 00:20:45.735 "trtype": "TCP", 00:20:45.735 "adrfam": "IPv4", 00:20:45.735 "traddr": "10.0.0.1", 00:20:45.735 "trsvcid": "47756" 00:20:45.735 }, 00:20:45.735 "auth": { 00:20:45.735 "state": "completed", 00:20:45.735 "digest": "sha256", 00:20:45.735 "dhgroup": "ffdhe4096" 00:20:45.735 } 00:20:45.735 } 00:20:45.735 ]' 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.735 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.997 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.997 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.997 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.997 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:45.997 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:46.567 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.828 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.088 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.349 { 00:20:47.349 "cntlid": 33, 00:20:47.349 "qid": 0, 00:20:47.349 "state": "enabled", 00:20:47.349 "thread": "nvmf_tgt_poll_group_000", 00:20:47.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:47.349 "listen_address": { 00:20:47.349 "trtype": "TCP", 00:20:47.349 "adrfam": "IPv4", 00:20:47.349 "traddr": "10.0.0.2", 00:20:47.349 "trsvcid": "4420" 00:20:47.349 }, 00:20:47.349 "peer_address": { 00:20:47.349 "trtype": "TCP", 00:20:47.349 "adrfam": "IPv4", 00:20:47.349 "traddr": "10.0.0.1", 00:20:47.349 "trsvcid": "47796" 00:20:47.349 }, 00:20:47.349 "auth": { 00:20:47.349 "state": "completed", 00:20:47.349 "digest": "sha256", 00:20:47.349 "dhgroup": "ffdhe6144" 00:20:47.349 } 00:20:47.349 } 00:20:47.349 ]' 00:20:47.349 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.610 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.610 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.610 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.610 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.610 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.610 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.610 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.871 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret 
DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:47.871 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:48.440 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.440 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.440 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.440 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.440 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.440 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.440 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.441 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.701 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.961 00:20:48.961 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.961 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.961 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.222 { 00:20:49.222 "cntlid": 35, 00:20:49.222 "qid": 0, 00:20:49.222 "state": "enabled", 00:20:49.222 "thread": "nvmf_tgt_poll_group_000", 00:20:49.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:49.222 "listen_address": { 00:20:49.222 "trtype": "TCP", 00:20:49.222 "adrfam": "IPv4", 00:20:49.222 "traddr": "10.0.0.2", 00:20:49.222 "trsvcid": "4420" 00:20:49.222 }, 00:20:49.222 "peer_address": { 00:20:49.222 "trtype": "TCP", 00:20:49.222 "adrfam": "IPv4", 00:20:49.222 "traddr": "10.0.0.1", 00:20:49.222 "trsvcid": "47830" 00:20:49.222 }, 00:20:49.222 "auth": { 00:20:49.222 "state": "completed", 00:20:49.222 "digest": "sha256", 00:20:49.222 "dhgroup": "ffdhe6144" 00:20:49.222 } 00:20:49.222 } 00:20:49.222 ]' 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.222 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.483 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:49.483 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:50.054 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.054 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.054 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.054 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.315 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.576 00:20:50.837 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.837 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.837 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.837 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.837 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.837 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.837 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.837 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.837 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.837 { 00:20:50.837 "cntlid": 37, 00:20:50.837 "qid": 0, 00:20:50.837 "state": "enabled", 00:20:50.837 "thread": "nvmf_tgt_poll_group_000", 00:20:50.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:50.837 "listen_address": { 00:20:50.837 "trtype": "TCP", 00:20:50.837 "adrfam": "IPv4", 00:20:50.837 "traddr": "10.0.0.2", 00:20:50.837 "trsvcid": "4420" 00:20:50.837 }, 00:20:50.837 "peer_address": { 00:20:50.837 "trtype": "TCP", 00:20:50.837 "adrfam": "IPv4", 00:20:50.837 "traddr": "10.0.0.1", 00:20:50.837 "trsvcid": "47870" 00:20:50.837 }, 00:20:50.837 "auth": { 00:20:50.837 "state": "completed", 00:20:50.837 "digest": "sha256", 00:20:50.837 "dhgroup": "ffdhe6144" 00:20:50.837 } 00:20:50.837 } 00:20:50.837 ]' 00:20:50.837 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:51.097 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.038 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.038 07:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.038 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.299 00:20:52.560 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.560 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.560 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.560 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.560 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.561 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.561 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.561 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.561 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.561 { 00:20:52.561 "cntlid": 39, 00:20:52.561 "qid": 0, 00:20:52.561 "state": "enabled", 00:20:52.561 "thread": "nvmf_tgt_poll_group_000", 00:20:52.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:52.561 "listen_address": { 00:20:52.561 "trtype": "TCP", 00:20:52.561 "adrfam": "IPv4", 00:20:52.561 "traddr": "10.0.0.2", 00:20:52.561 "trsvcid": "4420" 00:20:52.561 }, 00:20:52.561 "peer_address": { 00:20:52.561 "trtype": "TCP", 00:20:52.561 "adrfam": "IPv4", 00:20:52.561 "traddr": "10.0.0.1", 00:20:52.561 "trsvcid": "47896" 00:20:52.561 }, 00:20:52.561 "auth": { 00:20:52.561 "state": "completed", 00:20:52.561 "digest": "sha256", 00:20:52.561 "dhgroup": "ffdhe6144" 00:20:52.561 } 00:20:52.561 } 00:20:52.561 ]' 00:20:52.561 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.561 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.561 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.821 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.821 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.821 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:52.821 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.821 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.081 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:53.081 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:20:53.651 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.651 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.651 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.652 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.652 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.652 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.652 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.652 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:53.652 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
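The per-key loop above repeats the same host/target sequence for every digest/dhgroup/key combination. Stripped of timestamps and xtrace prefixes, the setup half of one iteration (sha256/ffdhe8192 with key0 at this point) reduces to the sketch below; rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path seen in the log, and key0/ckey0 are keyring names registered earlier in the run.

    # host side: restrict the allowed digests and DH groups for this iteration
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # target side: register the host NQN with its DH-HMAC-CHAP key pair
    # (key3 is registered without a ctrlr key in this run, so that leg
    # stays unidirectional)
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach a controller, authenticating with the same keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0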
00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.912 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.172 00:20:54.172 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.172 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.172 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.433 { 00:20:54.433 "cntlid": 41, 00:20:54.433 "qid": 0, 00:20:54.433 "state": "enabled", 00:20:54.433 "thread": "nvmf_tgt_poll_group_000", 00:20:54.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:54.433 "listen_address": { 00:20:54.433 "trtype": "TCP", 00:20:54.433 "adrfam": "IPv4", 00:20:54.433 "traddr": "10.0.0.2", 00:20:54.433 "trsvcid": "4420" 00:20:54.433 }, 00:20:54.433 "peer_address": { 00:20:54.433 "trtype": "TCP", 00:20:54.433 "adrfam": "IPv4", 00:20:54.433 "traddr": "10.0.0.1", 00:20:54.433 "trsvcid": "58780" 00:20:54.433 }, 00:20:54.433 "auth": { 00:20:54.433 "state": "completed", 00:20:54.433 "digest": "sha256", 00:20:54.433 "dhgroup": "ffdhe8192" 00:20:54.433 } 00:20:54.433 } 00:20:54.433 ]' 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.433 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.694 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.695 07:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.695 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.695 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.695 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.695 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:54.695 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.636 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.637 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.208 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.208 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.208 { 00:20:56.208 "cntlid": 43, 00:20:56.208 "qid": 0, 00:20:56.208 "state": "enabled", 00:20:56.208 "thread": "nvmf_tgt_poll_group_000", 00:20:56.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:56.209 "listen_address": { 00:20:56.209 "trtype": "TCP", 00:20:56.209 "adrfam": "IPv4", 00:20:56.209 "traddr": "10.0.0.2", 00:20:56.209 "trsvcid": "4420" 00:20:56.209 }, 00:20:56.209 "peer_address": { 00:20:56.209 "trtype": "TCP", 00:20:56.209 "adrfam": "IPv4", 00:20:56.209 "traddr": "10.0.0.1", 00:20:56.209 "trsvcid": "58798" 00:20:56.209 }, 00:20:56.209 "auth": { 00:20:56.209 "state": "completed", 00:20:56.209 "digest": "sha256", 00:20:56.209 "dhgroup": "ffdhe8192" 00:20:56.209 } 00:20:56.209 } 00:20:56.209 ]' 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.470 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.731 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:56.731 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:20:57.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.302 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.302 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.561 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.562 07:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.562 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.134 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.134 { 00:20:58.134 "cntlid": 45, 00:20:58.134 "qid": 0, 00:20:58.134 "state": "enabled", 00:20:58.134 "thread": "nvmf_tgt_poll_group_000", 00:20:58.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:58.134 "listen_address": { 00:20:58.134 "trtype": "TCP", 00:20:58.134 "adrfam": "IPv4", 00:20:58.134 "traddr": "10.0.0.2", 00:20:58.134 "trsvcid": "4420" 00:20:58.134 }, 00:20:58.134 "peer_address": { 00:20:58.134 "trtype": "TCP", 00:20:58.134 "adrfam": "IPv4", 00:20:58.134 "traddr": "10.0.0.1", 00:20:58.134 "trsvcid": "58832" 00:20:58.134 }, 00:20:58.134 "auth": { 00:20:58.134 "state": "completed", 00:20:58.134 "digest": "sha256", 00:20:58.134 "dhgroup": "ffdhe8192" 00:20:58.134 } 00:20:58.134 } 00:20:58.134 ]' 00:20:58.134 
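The --dhchap-secret / --dhchap-ctrl-secret strings passed to nvme connect throughout this run follow the DH-HMAC-CHAP secret representation from the NVMe specification; roughly, with the base64 payload elided:

    # DHHC-1:02:NWEwZTVj...QB+epw==:
    # ^      ^  ^
    # |      |  +- base64 of the secret with a CRC-32 appended
    # |      +---- secret transformation hash: 00 = none, 01 = SHA-256,
    # |            02 = SHA-384, 03 = SHA-512
    # +----------- DH-HMAC-CHAP secret, format version 1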
07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.134 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.406 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.406 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.406 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.406 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.406 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.406 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:58.406 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:59.347 07:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.347 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.917 00:20:59.918 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.918 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.918 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.918 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.918 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.918 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.918 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.178 { 00:21:00.178 "cntlid": 47, 00:21:00.178 "qid": 0, 00:21:00.178 "state": "enabled", 00:21:00.178 "thread": "nvmf_tgt_poll_group_000", 00:21:00.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:00.178 "listen_address": { 00:21:00.178 "trtype": "TCP", 00:21:00.178 "adrfam": "IPv4", 00:21:00.178 "traddr": "10.0.0.2", 00:21:00.178 "trsvcid": "4420" 00:21:00.178 }, 00:21:00.178 "peer_address": { 00:21:00.178 "trtype": "TCP", 00:21:00.178 "adrfam": "IPv4", 00:21:00.178 "traddr": "10.0.0.1", 00:21:00.178 "trsvcid": "58866" 00:21:00.178 }, 00:21:00.178 "auth": { 00:21:00.178 "state": "completed", 00:21:00.178 
"digest": "sha256", 00:21:00.178 "dhgroup": "ffdhe8192" 00:21:00.178 } 00:21:00.178 } 00:21:00.178 ]' 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.178 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.440 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:00.440 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.010 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:01.270 07:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.270 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.271 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.531 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.531 { 00:21:01.531 "cntlid": 49, 00:21:01.531 "qid": 0, 00:21:01.531 "state": "enabled", 00:21:01.531 "thread": "nvmf_tgt_poll_group_000", 00:21:01.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:01.531 "listen_address": { 00:21:01.531 "trtype": "TCP", 00:21:01.531 "adrfam": "IPv4", 
00:21:01.531 "traddr": "10.0.0.2", 00:21:01.531 "trsvcid": "4420" 00:21:01.531 }, 00:21:01.531 "peer_address": { 00:21:01.531 "trtype": "TCP", 00:21:01.531 "adrfam": "IPv4", 00:21:01.531 "traddr": "10.0.0.1", 00:21:01.531 "trsvcid": "58906" 00:21:01.531 }, 00:21:01.531 "auth": { 00:21:01.531 "state": "completed", 00:21:01.531 "digest": "sha384", 00:21:01.531 "dhgroup": "null" 00:21:01.531 } 00:21:01.531 } 00:21:01.531 ]' 00:21:01.531 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.792 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.792 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.792 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.792 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.792 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.792 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.792 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.052 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:02.052 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:02.623 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.624 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.624 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.624 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.624 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.624 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.624 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.884 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.884 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.145 { 00:21:03.145 "cntlid": 51, 00:21:03.145 "qid": 0, 00:21:03.145 "state": "enabled", 
00:21:03.145 "thread": "nvmf_tgt_poll_group_000", 00:21:03.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:03.145 "listen_address": { 00:21:03.145 "trtype": "TCP", 00:21:03.145 "adrfam": "IPv4", 00:21:03.145 "traddr": "10.0.0.2", 00:21:03.145 "trsvcid": "4420" 00:21:03.145 }, 00:21:03.145 "peer_address": { 00:21:03.145 "trtype": "TCP", 00:21:03.145 "adrfam": "IPv4", 00:21:03.145 "traddr": "10.0.0.1", 00:21:03.145 "trsvcid": "58930" 00:21:03.145 }, 00:21:03.145 "auth": { 00:21:03.145 "state": "completed", 00:21:03.145 "digest": "sha384", 00:21:03.145 "dhgroup": "null" 00:21:03.145 } 00:21:03.145 } 00:21:03.145 ]' 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.145 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.405 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.405 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.405 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.405 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.405 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.666 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:03.666 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:04.238 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.238 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:04.238 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.238 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.238 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.238 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.239 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:04.239 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.499 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.500 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.500 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.760 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.760 07:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.760 { 00:21:04.760 "cntlid": 53, 00:21:04.760 "qid": 0, 00:21:04.760 "state": "enabled", 00:21:04.760 "thread": "nvmf_tgt_poll_group_000", 00:21:04.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:04.760 "listen_address": { 00:21:04.760 "trtype": "TCP", 00:21:04.760 "adrfam": "IPv4", 00:21:04.760 "traddr": "10.0.0.2", 00:21:04.760 "trsvcid": "4420" 00:21:04.760 }, 00:21:04.760 "peer_address": { 00:21:04.760 "trtype": "TCP", 00:21:04.760 "adrfam": "IPv4", 00:21:04.760 "traddr": "10.0.0.1", 00:21:04.760 "trsvcid": "47032" 00:21:04.760 }, 00:21:04.760 "auth": { 00:21:04.760 "state": "completed", 00:21:04.760 "digest": "sha384", 00:21:04.760 "dhgroup": "null" 00:21:04.760 } 00:21:04.760 } 00:21:04.760 ]' 00:21:04.760 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.021 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.021 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.021 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:05.021 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.021 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.021 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.021 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.281 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:05.281 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.850 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:06.109 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.110 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.110 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.110 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.110 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.110 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.371 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.371 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.371 { 00:21:06.371 "cntlid": 55, 00:21:06.371 "qid": 0, 00:21:06.371 "state": "enabled", 00:21:06.371 "thread": "nvmf_tgt_poll_group_000", 00:21:06.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:06.371 "listen_address": { 00:21:06.371 "trtype": "TCP", 00:21:06.371 "adrfam": "IPv4", 00:21:06.371 "traddr": "10.0.0.2", 00:21:06.371 "trsvcid": "4420" 00:21:06.371 }, 00:21:06.371 "peer_address": { 00:21:06.371 "trtype": "TCP", 00:21:06.371 "adrfam": "IPv4", 00:21:06.371 "traddr": "10.0.0.1", 00:21:06.371 "trsvcid": "47056" 00:21:06.371 }, 00:21:06.371 "auth": { 00:21:06.371 "state": "completed", 00:21:06.371 "digest": "sha384", 00:21:06.371 "dhgroup": "null" 00:21:06.371 } 00:21:06.371 } 00:21:06.371 ]' 00:21:06.372 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.632 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.632 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.632 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.632 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.632 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.632 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.632 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.892 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:06.892 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.464 07:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.464 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.723 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.724 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.724 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.984 00:21:07.984 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.984 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.984 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.984 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.984 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.984 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:07.984 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.984 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.984 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.984 { 00:21:07.984 "cntlid": 57, 00:21:07.984 "qid": 0, 00:21:07.984 "state": "enabled", 00:21:07.984 "thread": "nvmf_tgt_poll_group_000", 00:21:07.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:07.984 "listen_address": { 00:21:07.984 "trtype": "TCP", 00:21:07.984 "adrfam": "IPv4", 00:21:07.984 "traddr": "10.0.0.2", 00:21:07.984 "trsvcid": "4420" 00:21:07.984 }, 00:21:07.984 "peer_address": { 00:21:07.984 "trtype": "TCP", 00:21:07.984 "adrfam": "IPv4", 00:21:07.984 "traddr": "10.0.0.1", 00:21:07.984 "trsvcid": "47084" 00:21:07.984 }, 00:21:07.984 "auth": { 00:21:07.984 "state": "completed", 00:21:07.984 "digest": "sha384", 00:21:07.984 "dhgroup": "ffdhe2048" 00:21:07.984 } 00:21:07.984 } 00:21:07.984 ]' 00:21:07.984 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.243 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.243 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.243 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.243 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.243 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.243 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.243 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.503 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:08.503 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.075 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.342 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.680 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.680 { 00:21:09.680 "cntlid": 59, 00:21:09.680 "qid": 0, 00:21:09.680 "state": "enabled", 00:21:09.680 "thread": "nvmf_tgt_poll_group_000", 00:21:09.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:09.680 "listen_address": { 00:21:09.680 "trtype": "TCP", 00:21:09.680 "adrfam": "IPv4", 00:21:09.680 "traddr": "10.0.0.2", 00:21:09.680 "trsvcid": "4420" 00:21:09.680 }, 00:21:09.680 "peer_address": { 00:21:09.680 "trtype": "TCP", 00:21:09.680 "adrfam": "IPv4", 00:21:09.680 "traddr": "10.0.0.1", 00:21:09.680 "trsvcid": "47100" 00:21:09.680 }, 00:21:09.680 "auth": { 00:21:09.680 "state": "completed", 00:21:09.680 "digest": "sha384", 00:21:09.680 "dhgroup": "ffdhe2048" 00:21:09.680 } 00:21:09.680 } 00:21:09.680 ]' 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.680 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.975 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.976 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.976 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.976 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.976 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.976 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:09.976 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.561 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.822 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.085 00:21:11.085 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.085 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.085 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.346 { 00:21:11.346 "cntlid": 61, 00:21:11.346 "qid": 0, 00:21:11.346 "state": "enabled", 00:21:11.346 "thread": "nvmf_tgt_poll_group_000", 00:21:11.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:11.346 "listen_address": { 00:21:11.346 "trtype": "TCP", 00:21:11.346 "adrfam": "IPv4", 00:21:11.346 "traddr": "10.0.0.2", 00:21:11.346 "trsvcid": "4420" 00:21:11.346 }, 00:21:11.346 "peer_address": { 00:21:11.346 "trtype": "TCP", 00:21:11.346 "adrfam": "IPv4", 00:21:11.346 "traddr": "10.0.0.1", 00:21:11.346 "trsvcid": "47128" 00:21:11.346 }, 00:21:11.346 "auth": { 00:21:11.346 "state": "completed", 00:21:11.346 "digest": "sha384", 00:21:11.346 "dhgroup": "ffdhe2048" 00:21:11.346 } 00:21:11.346 } 00:21:11.346 ]' 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.346 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.607 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:11.607 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:12.180 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.443 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.706 00:21:12.706 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.706 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.706 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.967 { 00:21:12.967 "cntlid": 63, 00:21:12.967 "qid": 0, 00:21:12.967 "state": "enabled", 00:21:12.967 "thread": "nvmf_tgt_poll_group_000", 00:21:12.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:12.967 "listen_address": { 00:21:12.967 "trtype": "TCP", 00:21:12.967 "adrfam": "IPv4", 00:21:12.967 "traddr": "10.0.0.2", 00:21:12.967 "trsvcid": "4420" 00:21:12.967 }, 00:21:12.967 "peer_address": { 00:21:12.967 "trtype": "TCP", 00:21:12.967 "adrfam": "IPv4", 00:21:12.967 "traddr": "10.0.0.1", 00:21:12.967 "trsvcid": "47164" 00:21:12.967 }, 00:21:12.967 "auth": { 00:21:12.967 "state": "completed", 00:21:12.967 "digest": "sha384", 00:21:12.967 "dhgroup": "ffdhe2048" 00:21:12.967 } 00:21:12.967 } 00:21:12.967 ]' 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.967 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.228 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.228 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.228 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.228 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:13.228 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:13.800 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:14.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.060 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.061 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.061 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.061 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.061 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.061 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.061 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.061 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.321 
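With the ffdhe3072/key0 controller attached, the trace below verifies the handshake from both ends. In essence the checks amount to the following, a sketch reusing the rpc.py path and NQNs from the trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: the controller only exists if authentication succeeded.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# Target side: the qpair must report the negotiated auth parameters.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha384
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe3072
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed

# Detach so the next key can be exercised against the same dhgroup.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0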
00:21:14.321 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.321 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.321 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.582 { 00:21:14.582 "cntlid": 65, 00:21:14.582 "qid": 0, 00:21:14.582 "state": "enabled", 00:21:14.582 "thread": "nvmf_tgt_poll_group_000", 00:21:14.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:14.582 "listen_address": { 00:21:14.582 "trtype": "TCP", 00:21:14.582 "adrfam": "IPv4", 00:21:14.582 "traddr": "10.0.0.2", 00:21:14.582 "trsvcid": "4420" 00:21:14.582 }, 00:21:14.582 "peer_address": { 00:21:14.582 "trtype": "TCP", 00:21:14.582 "adrfam": "IPv4", 00:21:14.582 "traddr": "10.0.0.1", 00:21:14.582 "trsvcid": "41322" 00:21:14.582 }, 00:21:14.582 "auth": { 00:21:14.582 "state": "completed", 00:21:14.582 "digest": "sha384", 00:21:14.582 "dhgroup": "ffdhe3072" 00:21:14.582 } 00:21:14.582 } 00:21:14.582 ]' 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.582 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.583 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.583 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.583 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.843 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.843 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.843 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.843 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:14.843 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.784 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.045 00:21:16.045 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.045 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.045 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.305 { 00:21:16.305 "cntlid": 67, 00:21:16.305 "qid": 0, 00:21:16.305 "state": "enabled", 00:21:16.305 "thread": "nvmf_tgt_poll_group_000", 00:21:16.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:16.305 "listen_address": { 00:21:16.305 "trtype": "TCP", 00:21:16.305 "adrfam": "IPv4", 00:21:16.305 "traddr": "10.0.0.2", 00:21:16.305 "trsvcid": "4420" 00:21:16.305 }, 00:21:16.305 "peer_address": { 00:21:16.305 "trtype": "TCP", 00:21:16.305 "adrfam": "IPv4", 00:21:16.305 "traddr": "10.0.0.1", 00:21:16.305 "trsvcid": "41332" 00:21:16.305 }, 00:21:16.305 "auth": { 00:21:16.305 "state": "completed", 00:21:16.305 "digest": "sha384", 00:21:16.305 "dhgroup": "ffdhe3072" 00:21:16.305 } 00:21:16.305 } 00:21:16.305 ]' 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.305 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.565 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret 
DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:16.565 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.134 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.395 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.396 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.655 00:21:17.656 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.656 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.656 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.917 { 00:21:17.917 "cntlid": 69, 00:21:17.917 "qid": 0, 00:21:17.917 "state": "enabled", 00:21:17.917 "thread": "nvmf_tgt_poll_group_000", 00:21:17.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:17.917 "listen_address": { 00:21:17.917 "trtype": "TCP", 00:21:17.917 "adrfam": "IPv4", 00:21:17.917 "traddr": "10.0.0.2", 00:21:17.917 "trsvcid": "4420" 00:21:17.917 }, 00:21:17.917 "peer_address": { 00:21:17.917 "trtype": "TCP", 00:21:17.917 "adrfam": "IPv4", 00:21:17.917 "traddr": "10.0.0.1", 00:21:17.917 "trsvcid": "41350" 00:21:17.917 }, 00:21:17.917 "auth": { 00:21:17.917 "state": "completed", 00:21:17.917 "digest": "sha384", 00:21:17.917 "dhgroup": "ffdhe3072" 00:21:17.917 } 00:21:17.917 } 00:21:17.917 ]' 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.917 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.917 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.917 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.917 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.917 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.917 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:18.177 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:18.177 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.748 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
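The trace above is one pass of auth.sh's digest/dhgroup/key matrix: configure the host with bdev_nvme_set_options, authorize the host NQN on the target with nvmf_subsystem_add_host, attach and verify a controller, then tear down and repeat with the next combination. The condensed bash sketch below reconstructs that cycle from the commands visible in this log; it is a reading aid, not the verbatim auth.sh source, and it assumes the DH-HMAC-CHAP keys (key0..key3, ckey0..ckey2) were already registered with both RPC servers earlier in the run, which is outside this excerpt.

#!/usr/bin/env bash
# Sketch of one iteration of the auth matrix traced above (sha384 + ffdhe3072 + key3).
# Assumption: keys key0..key3 / ckey0..ckey2 are already loaded on both sides.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

digest=sha384 dhgroup=ffdhe3072 keyid=3

# Host side: restrict the initiator to the digest/dhgroup pair under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: authorize the host NQN with the key under test. The controller
# key is optional; there is no ckey3, so key3 runs unidirectional auth.
ckey=()   # would be (--dhchap-ctrlr-key "ckey$keyid") when that key exists
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"

# Host side: attach a controller over TCP, authenticating with the same keys.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" "${ckey[@]}"

# Teardown, mirroring the tail of the previous iteration in the trace above.
# (The log also runs a kernel-initiator pass between these two steps:
#  nvme connect ... --dhchap-secret DHHC-1:... / nvme disconnect.)
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at auth.sh@68 in the trace: the --dhchap-ctrlr-key flag is emitted only when a controller key exists for that key index, which is why the nvmf_subsystem_add_host call just traced passes --dhchap-key key3 alone.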
00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.040 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.301 00:21:19.301 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.301 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.301 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.563 { 00:21:19.563 "cntlid": 71, 00:21:19.563 "qid": 0, 00:21:19.563 "state": "enabled", 00:21:19.563 "thread": "nvmf_tgt_poll_group_000", 00:21:19.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:19.563 "listen_address": { 00:21:19.563 "trtype": "TCP", 00:21:19.563 "adrfam": "IPv4", 00:21:19.563 "traddr": "10.0.0.2", 00:21:19.563 "trsvcid": "4420" 00:21:19.563 }, 00:21:19.563 "peer_address": { 00:21:19.563 "trtype": "TCP", 00:21:19.563 "adrfam": "IPv4", 00:21:19.563 "traddr": "10.0.0.1", 00:21:19.563 "trsvcid": "41374" 00:21:19.563 }, 00:21:19.563 "auth": { 00:21:19.563 "state": "completed", 00:21:19.563 "digest": "sha384", 00:21:19.563 "dhgroup": "ffdhe3072" 00:21:19.563 } 00:21:19.563 } 00:21:19.563 ]' 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.563 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.823 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:19.823 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:20.764 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
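Each attach is followed by the same verification block (auth.sh@73 through @77 in the traces on either side of this point): confirm the host-side controller came up as nvme0, then read the negotiated authentication parameters back out of the target's qpair listing. A minimal sketch of that check, reusing $rpc, $hostsock, and $subnqn from the sketch above; the verify_auth wrapper name is illustrative, not from auth.sh.

# Verification pattern applied after every attach in this trace.
verify_auth() {
    local digest=$1 dhgroup=$2 qpairs

    # Host side: the attached controller must be named nvme0 (auth.sh@73).
    [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must report the negotiated digest/dhgroup and a
    # completed DH-HMAC-CHAP exchange (auth.sh@74..@77).
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
}

verify_auth sha384 ffdhe4096   # the round that begins at this point in the log

The qpairs JSON bodies repeated throughout this trace are the raw output those jq filters run against; an auth state of "completed" is what distinguishes a successfully authenticated qpair from one still mid-handshake.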
00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.765 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.026 00:21:21.026 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.026 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.026 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.286 { 00:21:21.286 "cntlid": 73, 00:21:21.286 "qid": 0, 00:21:21.286 "state": "enabled", 00:21:21.286 "thread": "nvmf_tgt_poll_group_000", 00:21:21.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:21.286 "listen_address": { 00:21:21.286 "trtype": "TCP", 00:21:21.286 "adrfam": "IPv4", 00:21:21.286 "traddr": "10.0.0.2", 00:21:21.286 "trsvcid": "4420" 00:21:21.286 }, 00:21:21.286 "peer_address": { 00:21:21.286 "trtype": "TCP", 00:21:21.286 "adrfam": "IPv4", 00:21:21.286 "traddr": "10.0.0.1", 00:21:21.286 "trsvcid": "41386" 00:21:21.286 }, 00:21:21.286 "auth": { 00:21:21.286 "state": "completed", 00:21:21.286 "digest": "sha384", 00:21:21.286 "dhgroup": "ffdhe4096" 00:21:21.286 } 00:21:21.286 } 00:21:21.286 ]' 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.286 
07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.286 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.547 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:21.547 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.118 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.378 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.379 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.379 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.379 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.379 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.639 00:21:22.639 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.639 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.639 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.899 { 00:21:22.899 "cntlid": 75, 00:21:22.899 "qid": 0, 00:21:22.899 "state": "enabled", 00:21:22.899 "thread": "nvmf_tgt_poll_group_000", 00:21:22.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:22.899 "listen_address": { 00:21:22.899 "trtype": "TCP", 00:21:22.899 "adrfam": "IPv4", 00:21:22.899 "traddr": "10.0.0.2", 00:21:22.899 "trsvcid": "4420" 00:21:22.899 }, 00:21:22.899 "peer_address": { 00:21:22.899 "trtype": "TCP", 00:21:22.899 "adrfam": "IPv4", 00:21:22.899 "traddr": "10.0.0.1", 00:21:22.899 "trsvcid": "41398" 00:21:22.899 }, 00:21:22.899 "auth": { 00:21:22.899 "state": "completed", 00:21:22.899 "digest": "sha384", 00:21:22.899 "dhgroup": "ffdhe4096" 00:21:22.899 } 00:21:22.899 } 00:21:22.899 ]' 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.899 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.899 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:22.899 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.899 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.899 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.899 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.160 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:23.160 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.729 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.989 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.990 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.990 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.990 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.990 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.990 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.250 00:21:24.250 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.250 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.250 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.510 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.510 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.511 { 00:21:24.511 "cntlid": 77, 00:21:24.511 "qid": 0, 00:21:24.511 "state": "enabled", 00:21:24.511 "thread": "nvmf_tgt_poll_group_000", 00:21:24.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:24.511 "listen_address": { 00:21:24.511 "trtype": "TCP", 00:21:24.511 "adrfam": "IPv4", 00:21:24.511 "traddr": "10.0.0.2", 00:21:24.511 "trsvcid": "4420" 00:21:24.511 }, 00:21:24.511 "peer_address": { 00:21:24.511 "trtype": "TCP", 00:21:24.511 "adrfam": "IPv4", 00:21:24.511 "traddr": "10.0.0.1", 00:21:24.511 "trsvcid": "51458" 00:21:24.511 }, 00:21:24.511 "auth": { 00:21:24.511 "state": "completed", 00:21:24.511 "digest": "sha384", 00:21:24.511 "dhgroup": "ffdhe4096" 00:21:24.511 } 00:21:24.511 } 00:21:24.511 ]' 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.511 07:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.511 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.771 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:24.771 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:25.711 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.711 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.711 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.711 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.711 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.712 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.972 00:21:25.972 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.972 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.972 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.234 { 00:21:26.234 "cntlid": 79, 00:21:26.234 "qid": 0, 00:21:26.234 "state": "enabled", 00:21:26.234 "thread": "nvmf_tgt_poll_group_000", 00:21:26.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:26.234 "listen_address": { 00:21:26.234 "trtype": "TCP", 00:21:26.234 "adrfam": "IPv4", 00:21:26.234 "traddr": "10.0.0.2", 00:21:26.234 "trsvcid": "4420" 00:21:26.234 }, 00:21:26.234 "peer_address": { 00:21:26.234 "trtype": "TCP", 00:21:26.234 "adrfam": "IPv4", 00:21:26.234 "traddr": "10.0.0.1", 00:21:26.234 "trsvcid": "51474" 00:21:26.234 }, 00:21:26.234 "auth": { 00:21:26.234 "state": "completed", 00:21:26.234 "digest": "sha384", 00:21:26.234 "dhgroup": "ffdhe4096" 00:21:26.234 } 00:21:26.234 } 00:21:26.234 ]' 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.234 07:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.234 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.495 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:26.495 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.067 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.328 07:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.328 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.589 00:21:27.589 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.589 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.589 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.849 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.849 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.849 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.849 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.849 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.849 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.849 { 00:21:27.849 "cntlid": 81, 00:21:27.849 "qid": 0, 00:21:27.849 "state": "enabled", 00:21:27.849 "thread": "nvmf_tgt_poll_group_000", 00:21:27.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:27.849 "listen_address": { 00:21:27.849 "trtype": "TCP", 00:21:27.849 "adrfam": "IPv4", 00:21:27.849 "traddr": "10.0.0.2", 00:21:27.849 "trsvcid": "4420" 00:21:27.849 }, 00:21:27.849 "peer_address": { 00:21:27.849 "trtype": "TCP", 00:21:27.849 "adrfam": "IPv4", 00:21:27.849 "traddr": "10.0.0.1", 00:21:27.850 "trsvcid": "51504" 00:21:27.850 }, 00:21:27.850 "auth": { 00:21:27.850 "state": "completed", 00:21:27.850 "digest": 
"sha384", 00:21:27.850 "dhgroup": "ffdhe6144" 00:21:27.850 } 00:21:27.850 } 00:21:27.850 ]' 00:21:27.850 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.850 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.850 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.850 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.850 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.110 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.110 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.110 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.110 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:28.111 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:29.052 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.052 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:29.052 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.052 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.053 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.053 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.053 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.053 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.053 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.314 00:21:29.314 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.314 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.314 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.575 { 00:21:29.575 "cntlid": 83, 00:21:29.575 "qid": 0, 00:21:29.575 "state": "enabled", 00:21:29.575 "thread": "nvmf_tgt_poll_group_000", 00:21:29.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:29.575 "listen_address": { 00:21:29.575 "trtype": "TCP", 00:21:29.575 "adrfam": "IPv4", 00:21:29.575 "traddr": "10.0.0.2", 00:21:29.575 
"trsvcid": "4420" 00:21:29.575 }, 00:21:29.575 "peer_address": { 00:21:29.575 "trtype": "TCP", 00:21:29.575 "adrfam": "IPv4", 00:21:29.575 "traddr": "10.0.0.1", 00:21:29.575 "trsvcid": "51532" 00:21:29.575 }, 00:21:29.575 "auth": { 00:21:29.575 "state": "completed", 00:21:29.575 "digest": "sha384", 00:21:29.575 "dhgroup": "ffdhe6144" 00:21:29.575 } 00:21:29.575 } 00:21:29.575 ]' 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.575 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.835 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:29.835 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:30.406 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.667 
07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.667 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.237 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.237 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.237 { 00:21:31.237 "cntlid": 85, 00:21:31.237 "qid": 0, 00:21:31.237 "state": "enabled", 00:21:31.237 "thread": "nvmf_tgt_poll_group_000", 00:21:31.237 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.237 "listen_address": { 00:21:31.237 "trtype": "TCP", 00:21:31.237 "adrfam": "IPv4", 00:21:31.237 "traddr": "10.0.0.2", 00:21:31.237 "trsvcid": "4420" 00:21:31.237 }, 00:21:31.237 "peer_address": { 00:21:31.237 "trtype": "TCP", 00:21:31.237 "adrfam": "IPv4", 00:21:31.238 "traddr": "10.0.0.1", 00:21:31.238 "trsvcid": "51558" 00:21:31.238 }, 00:21:31.238 "auth": { 00:21:31.238 "state": "completed", 00:21:31.238 "digest": "sha384", 00:21:31.238 "dhgroup": "ffdhe6144" 00:21:31.238 } 00:21:31.238 } 00:21:31.238 ]' 00:21:31.238 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.238 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.238 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.238 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.238 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.498 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.498 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.498 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.498 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:31.498 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.438 07:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.438 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.698 00:21:32.698 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.698 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.698 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.958 { 00:21:32.958 "cntlid": 87, 
00:21:32.958 "qid": 0, 00:21:32.958 "state": "enabled", 00:21:32.958 "thread": "nvmf_tgt_poll_group_000", 00:21:32.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:32.958 "listen_address": { 00:21:32.958 "trtype": "TCP", 00:21:32.958 "adrfam": "IPv4", 00:21:32.958 "traddr": "10.0.0.2", 00:21:32.958 "trsvcid": "4420" 00:21:32.958 }, 00:21:32.958 "peer_address": { 00:21:32.958 "trtype": "TCP", 00:21:32.958 "adrfam": "IPv4", 00:21:32.958 "traddr": "10.0.0.1", 00:21:32.958 "trsvcid": "51592" 00:21:32.958 }, 00:21:32.958 "auth": { 00:21:32.958 "state": "completed", 00:21:32.958 "digest": "sha384", 00:21:32.958 "dhgroup": "ffdhe6144" 00:21:32.958 } 00:21:32.958 } 00:21:32.958 ]' 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.958 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.219 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.219 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.219 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.219 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:33.219 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.161 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.733 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.733 { 00:21:34.733 "cntlid": 89, 00:21:34.733 "qid": 0, 00:21:34.733 "state": "enabled", 00:21:34.733 "thread": "nvmf_tgt_poll_group_000", 00:21:34.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:34.733 "listen_address": { 00:21:34.733 "trtype": "TCP", 00:21:34.733 "adrfam": "IPv4", 00:21:34.733 "traddr": "10.0.0.2", 00:21:34.733 "trsvcid": "4420" 00:21:34.733 }, 00:21:34.733 "peer_address": { 00:21:34.733 "trtype": "TCP", 00:21:34.733 "adrfam": "IPv4", 00:21:34.733 "traddr": "10.0.0.1", 00:21:34.733 "trsvcid": "53186" 00:21:34.733 }, 00:21:34.733 "auth": { 00:21:34.733 "state": "completed", 00:21:34.733 "digest": "sha384", 00:21:34.733 "dhgroup": "ffdhe8192" 00:21:34.733 } 00:21:34.733 } 00:21:34.733 ]' 00:21:34.733 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.993 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.993 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.993 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.993 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.993 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.993 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.993 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.253 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:35.254 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:35.825 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.825 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.825 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.825 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.825 07:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.825 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.825 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.825 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.087 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.660 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.660 { 00:21:36.660 "cntlid": 91, 00:21:36.660 "qid": 0, 00:21:36.660 "state": "enabled", 00:21:36.660 "thread": "nvmf_tgt_poll_group_000", 00:21:36.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:36.660 "listen_address": { 00:21:36.660 "trtype": "TCP", 00:21:36.660 "adrfam": "IPv4", 00:21:36.660 "traddr": "10.0.0.2", 00:21:36.660 "trsvcid": "4420" 00:21:36.660 }, 00:21:36.660 "peer_address": { 00:21:36.660 "trtype": "TCP", 00:21:36.660 "adrfam": "IPv4", 00:21:36.660 "traddr": "10.0.0.1", 00:21:36.660 "trsvcid": "53216" 00:21:36.660 }, 00:21:36.660 "auth": { 00:21:36.660 "state": "completed", 00:21:36.660 "digest": "sha384", 00:21:36.660 "dhgroup": "ffdhe8192" 00:21:36.660 } 00:21:36.660 } 00:21:36.660 ]' 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.660 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.920 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.920 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.920 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.920 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.920 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.920 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:36.920 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.863 07:16:48 
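
Each connect_authenticate pass ends with the same verification seen just above: auth.sh@73 confirms the controller name, @74 fetches the subsystem's queue pairs, and @75 through @77 assert that the digest, DH group, and authentication state recorded on the qpair match what was configured. Boiled down, the checks are three jq probes over the nvmf_subsystem_get_qpairs output (the rpc shorthand below is illustrative; the log spells out the full rpc.py path each time):

rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

qpairs=$(rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# The auth object reflects what was actually negotiated, not just requested.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
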
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.863 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.438 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.438 07:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.438 { 00:21:38.438 "cntlid": 93, 00:21:38.438 "qid": 0, 00:21:38.438 "state": "enabled", 00:21:38.438 "thread": "nvmf_tgt_poll_group_000", 00:21:38.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.438 "listen_address": { 00:21:38.438 "trtype": "TCP", 00:21:38.438 "adrfam": "IPv4", 00:21:38.438 "traddr": "10.0.0.2", 00:21:38.438 "trsvcid": "4420" 00:21:38.438 }, 00:21:38.438 "peer_address": { 00:21:38.438 "trtype": "TCP", 00:21:38.438 "adrfam": "IPv4", 00:21:38.438 "traddr": "10.0.0.1", 00:21:38.438 "trsvcid": "53246" 00:21:38.438 }, 00:21:38.438 "auth": { 00:21:38.438 "state": "completed", 00:21:38.438 "digest": "sha384", 00:21:38.438 "dhgroup": "ffdhe8192" 00:21:38.438 } 00:21:38.438 } 00:21:38.438 ]' 00:21:38.438 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.700 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.700 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.700 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.700 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.700 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.700 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.700 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.960 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:38.960 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:39.531 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.532 07:16:50 
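
The host-side leg at auth.sh@80 through @83 goes through the kernel initiator rather than the SPDK bdev layer: nvme-cli connects with both DH-CHAP secrets, the controller is torn down again, and the host entry is removed so the next combination starts clean. The command shape, exactly as echoed in this log (secrets abbreviated here):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
  --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
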
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.532 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.532 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.532 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.532 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.532 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.532 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.792 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.793 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.793 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.793 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.053 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.313 { 00:21:40.313 "cntlid": 95, 00:21:40.313 "qid": 0, 00:21:40.313 "state": "enabled", 00:21:40.313 "thread": "nvmf_tgt_poll_group_000", 00:21:40.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:40.313 "listen_address": { 00:21:40.313 "trtype": "TCP", 00:21:40.313 "adrfam": "IPv4", 00:21:40.313 "traddr": "10.0.0.2", 00:21:40.313 "trsvcid": "4420" 00:21:40.313 }, 00:21:40.313 "peer_address": { 00:21:40.313 "trtype": "TCP", 00:21:40.313 "adrfam": "IPv4", 00:21:40.313 "traddr": "10.0.0.1", 00:21:40.313 "trsvcid": "53278" 00:21:40.313 }, 00:21:40.313 "auth": { 00:21:40.313 "state": "completed", 00:21:40.313 "digest": "sha384", 00:21:40.313 "dhgroup": "ffdhe8192" 00:21:40.313 } 00:21:40.313 } 00:21:40.313 ]' 00:21:40.313 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.574 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.574 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.574 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.574 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.574 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.574 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.574 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.834 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:40.834 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.406 07:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.406 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.667 00:21:41.667 
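
Note the asymmetry in the add_host calls throughout this section: the expansion at auth.sh@68, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), only emits --dhchap-ctrlr-key when a controller key exists for that key ID. In this log key0, key1, and key2 are registered bidirectionally (the target must also prove its identity back to the host), while key3 carries no ckey, so that combination exercises one-way authentication ($subnqn and $hostnqn below are placeholders for the NQNs spelled out in the log):

# Bidirectional: both flags present (key0/key1/key2 in this log).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
# One-way: ckeys[3] is empty, so the :+ expansion drops the flag entirely.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
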
07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.667 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.927 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.927 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.927 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.927 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.927 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.927 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.927 { 00:21:41.927 "cntlid": 97, 00:21:41.927 "qid": 0, 00:21:41.927 "state": "enabled", 00:21:41.928 "thread": "nvmf_tgt_poll_group_000", 00:21:41.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.928 "listen_address": { 00:21:41.928 "trtype": "TCP", 00:21:41.928 "adrfam": "IPv4", 00:21:41.928 "traddr": "10.0.0.2", 00:21:41.928 "trsvcid": "4420" 00:21:41.928 }, 00:21:41.928 "peer_address": { 00:21:41.928 "trtype": "TCP", 00:21:41.928 "adrfam": "IPv4", 00:21:41.928 "traddr": "10.0.0.1", 00:21:41.928 "trsvcid": "53298" 00:21:41.928 }, 00:21:41.928 "auth": { 00:21:41.928 "state": "completed", 00:21:41.928 "digest": "sha512", 00:21:41.928 "dhgroup": "null" 00:21:41.928 } 00:21:41.928 } 00:21:41.928 ]' 00:21:41.928 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.928 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.928 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.187 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.187 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.187 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.187 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.187 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.187 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:42.187 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:43.127 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.128 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.389 00:21:43.389 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.389 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.389 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.648 { 00:21:43.648 "cntlid": 99, 00:21:43.648 "qid": 0, 00:21:43.648 "state": "enabled", 00:21:43.648 "thread": "nvmf_tgt_poll_group_000", 00:21:43.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.648 "listen_address": { 00:21:43.648 "trtype": "TCP", 00:21:43.648 "adrfam": "IPv4", 00:21:43.648 "traddr": "10.0.0.2", 00:21:43.648 "trsvcid": "4420" 00:21:43.648 }, 00:21:43.648 "peer_address": { 00:21:43.648 "trtype": "TCP", 00:21:43.648 "adrfam": "IPv4", 00:21:43.648 "traddr": "10.0.0.1", 00:21:43.648 "trsvcid": "53328" 00:21:43.648 }, 00:21:43.648 "auth": { 00:21:43.648 "state": "completed", 00:21:43.648 "digest": "sha512", 00:21:43.648 "dhgroup": "null" 00:21:43.648 } 00:21:43.648 } 00:21:43.648 ]' 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.648 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.649 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.649 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.907 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:43.908 07:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.478 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.738 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:44.739 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.000 00:21:45.000 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.000 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.000 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.261 { 00:21:45.261 "cntlid": 101, 00:21:45.261 "qid": 0, 00:21:45.261 "state": "enabled", 00:21:45.261 "thread": "nvmf_tgt_poll_group_000", 00:21:45.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:45.261 "listen_address": { 00:21:45.261 "trtype": "TCP", 00:21:45.261 "adrfam": "IPv4", 00:21:45.261 "traddr": "10.0.0.2", 00:21:45.261 "trsvcid": "4420" 00:21:45.261 }, 00:21:45.261 "peer_address": { 00:21:45.261 "trtype": "TCP", 00:21:45.261 "adrfam": "IPv4", 00:21:45.261 "traddr": "10.0.0.1", 00:21:45.261 "trsvcid": "58518" 00:21:45.261 }, 00:21:45.261 "auth": { 00:21:45.261 "state": "completed", 00:21:45.261 "digest": "sha512", 00:21:45.261 "dhgroup": "null" 00:21:45.261 } 00:21:45.261 } 00:21:45.261 ]' 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.261 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.521 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:45.521 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.092 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.353 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.612 00:21:46.612 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.612 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.612 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.873 { 00:21:46.873 "cntlid": 103, 00:21:46.873 "qid": 0, 00:21:46.873 "state": "enabled", 00:21:46.873 "thread": "nvmf_tgt_poll_group_000", 00:21:46.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.873 "listen_address": { 00:21:46.873 "trtype": "TCP", 00:21:46.873 "adrfam": "IPv4", 00:21:46.873 "traddr": "10.0.0.2", 00:21:46.873 "trsvcid": "4420" 00:21:46.873 }, 00:21:46.873 "peer_address": { 00:21:46.873 "trtype": "TCP", 00:21:46.873 "adrfam": "IPv4", 00:21:46.873 "traddr": "10.0.0.1", 00:21:46.873 "trsvcid": "58534" 00:21:46.873 }, 00:21:46.873 "auth": { 00:21:46.873 "state": "completed", 00:21:46.873 "digest": "sha512", 00:21:46.873 "dhgroup": "null" 00:21:46.873 } 00:21:46.873 } 00:21:46.873 ]' 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:46.873 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.873 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.873 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.873 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.133 07:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:47.133 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.709 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.989 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.378 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.378 { 00:21:48.378 "cntlid": 105, 00:21:48.378 "qid": 0, 00:21:48.378 "state": "enabled", 00:21:48.378 "thread": "nvmf_tgt_poll_group_000", 00:21:48.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:48.378 "listen_address": { 00:21:48.378 "trtype": "TCP", 00:21:48.378 "adrfam": "IPv4", 00:21:48.378 "traddr": "10.0.0.2", 00:21:48.378 "trsvcid": "4420" 00:21:48.378 }, 00:21:48.378 "peer_address": { 00:21:48.378 "trtype": "TCP", 00:21:48.378 "adrfam": "IPv4", 00:21:48.378 "traddr": "10.0.0.1", 00:21:48.378 "trsvcid": "58560" 00:21:48.378 }, 00:21:48.378 "auth": { 00:21:48.378 "state": "completed", 00:21:48.378 "digest": "sha512", 00:21:48.378 "dhgroup": "ffdhe2048" 00:21:48.378 } 00:21:48.378 } 00:21:48.378 ]' 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.378 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.658 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.658 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.658 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.658 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.658 07:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.658 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:48.658 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:49.599 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.600 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.860 00:21:49.860 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.860 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.860 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.120 { 00:21:50.120 "cntlid": 107, 00:21:50.120 "qid": 0, 00:21:50.120 "state": "enabled", 00:21:50.120 "thread": "nvmf_tgt_poll_group_000", 00:21:50.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:50.120 "listen_address": { 00:21:50.120 "trtype": "TCP", 00:21:50.120 "adrfam": "IPv4", 00:21:50.120 "traddr": "10.0.0.2", 00:21:50.120 "trsvcid": "4420" 00:21:50.120 }, 00:21:50.120 "peer_address": { 00:21:50.120 "trtype": "TCP", 00:21:50.120 "adrfam": "IPv4", 00:21:50.120 "traddr": "10.0.0.1", 00:21:50.120 "trsvcid": "58594" 00:21:50.120 }, 00:21:50.120 "auth": { 00:21:50.120 "state": "completed", 00:21:50.120 "digest": "sha512", 00:21:50.120 "dhgroup": "ffdhe2048" 00:21:50.120 } 00:21:50.120 } 00:21:50.120 ]' 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.120 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.381 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:50.381 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.951 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.212 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.473 00:21:51.473 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.473 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.473 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.734 { 00:21:51.734 "cntlid": 109, 00:21:51.734 "qid": 0, 00:21:51.734 "state": "enabled", 00:21:51.734 "thread": "nvmf_tgt_poll_group_000", 00:21:51.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:51.734 "listen_address": { 00:21:51.734 "trtype": "TCP", 00:21:51.734 "adrfam": "IPv4", 00:21:51.734 "traddr": "10.0.0.2", 00:21:51.734 "trsvcid": "4420" 00:21:51.734 }, 00:21:51.734 "peer_address": { 00:21:51.734 "trtype": "TCP", 00:21:51.734 "adrfam": "IPv4", 00:21:51.734 "traddr": "10.0.0.1", 00:21:51.734 "trsvcid": "58606" 00:21:51.734 }, 00:21:51.734 "auth": { 00:21:51.734 "state": "completed", 00:21:51.734 "digest": "sha512", 00:21:51.734 "dhgroup": "ffdhe2048" 00:21:51.734 } 00:21:51.734 } 00:21:51.734 ]' 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.734 07:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.734 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.994 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:51.994 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.565 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:52.825 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:52.825 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.825 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.825 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.825 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.825 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.825 07:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:52.825 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.826 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.826 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.826 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.826 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.826 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.086 00:21:53.086 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.086 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.086 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.347 { 00:21:53.347 "cntlid": 111, 00:21:53.347 "qid": 0, 00:21:53.347 "state": "enabled", 00:21:53.347 "thread": "nvmf_tgt_poll_group_000", 00:21:53.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:53.347 "listen_address": { 00:21:53.347 "trtype": "TCP", 00:21:53.347 "adrfam": "IPv4", 00:21:53.347 "traddr": "10.0.0.2", 00:21:53.347 "trsvcid": "4420" 00:21:53.347 }, 00:21:53.347 "peer_address": { 00:21:53.347 "trtype": "TCP", 00:21:53.347 "adrfam": "IPv4", 00:21:53.347 "traddr": "10.0.0.1", 00:21:53.347 "trsvcid": "58632" 00:21:53.347 }, 00:21:53.347 "auth": { 00:21:53.347 "state": "completed", 00:21:53.347 "digest": "sha512", 00:21:53.347 "dhgroup": "ffdhe2048" 00:21:53.347 } 00:21:53.347 } 00:21:53.347 ]' 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.347 
07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.347 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.609 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:53.609 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.179 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.438 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.698 00:21:54.698 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.698 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.698 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.958 { 00:21:54.958 "cntlid": 113, 00:21:54.958 "qid": 0, 00:21:54.958 "state": "enabled", 00:21:54.958 "thread": "nvmf_tgt_poll_group_000", 00:21:54.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:54.958 "listen_address": { 00:21:54.958 "trtype": "TCP", 00:21:54.958 "adrfam": "IPv4", 00:21:54.958 "traddr": "10.0.0.2", 00:21:54.958 "trsvcid": "4420" 00:21:54.958 }, 00:21:54.958 "peer_address": { 00:21:54.958 "trtype": "TCP", 00:21:54.958 "adrfam": "IPv4", 00:21:54.958 "traddr": "10.0.0.1", 00:21:54.958 "trsvcid": "40438" 00:21:54.958 }, 00:21:54.958 "auth": { 00:21:54.958 "state": "completed", 00:21:54.958 "digest": "sha512", 00:21:54.958 "dhgroup": "ffdhe3072" 00:21:54.958 } 00:21:54.958 } 00:21:54.958 ]' 00:21:54.958 07:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.958 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.958 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.958 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.958 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.958 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.958 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.262 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:55.262 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.831 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.091 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.350 00:21:56.350 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.350 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.350 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.609 { 00:21:56.609 "cntlid": 115, 00:21:56.609 "qid": 0, 00:21:56.609 "state": "enabled", 00:21:56.609 "thread": "nvmf_tgt_poll_group_000", 00:21:56.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.609 "listen_address": { 00:21:56.609 "trtype": "TCP", 00:21:56.609 "adrfam": "IPv4", 00:21:56.609 "traddr": "10.0.0.2", 00:21:56.609 "trsvcid": "4420" 00:21:56.609 }, 00:21:56.609 "peer_address": { 00:21:56.609 "trtype": "TCP", 00:21:56.609 "adrfam": "IPv4", 
00:21:56.609 "traddr": "10.0.0.1", 00:21:56.609 "trsvcid": "40468" 00:21:56.609 }, 00:21:56.609 "auth": { 00:21:56.609 "state": "completed", 00:21:56.609 "digest": "sha512", 00:21:56.609 "dhgroup": "ffdhe3072" 00:21:56.609 } 00:21:56.609 } 00:21:56.609 ]' 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.609 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.868 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:56.868 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.439 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.699 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.958 00:21:57.959 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.959 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.959 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.221 { 00:21:58.221 "cntlid": 117, 00:21:58.221 "qid": 0, 00:21:58.221 "state": "enabled", 00:21:58.221 "thread": "nvmf_tgt_poll_group_000", 00:21:58.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:58.221 "listen_address": { 00:21:58.221 "trtype": "TCP", 
00:21:58.221 "adrfam": "IPv4", 00:21:58.221 "traddr": "10.0.0.2", 00:21:58.221 "trsvcid": "4420" 00:21:58.221 }, 00:21:58.221 "peer_address": { 00:21:58.221 "trtype": "TCP", 00:21:58.221 "adrfam": "IPv4", 00:21:58.221 "traddr": "10.0.0.1", 00:21:58.221 "trsvcid": "40484" 00:21:58.221 }, 00:21:58.221 "auth": { 00:21:58.221 "state": "completed", 00:21:58.221 "digest": "sha512", 00:21:58.221 "dhgroup": "ffdhe3072" 00:21:58.221 } 00:21:58.221 } 00:21:58.221 ]' 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.221 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.480 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:58.480 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.050 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.310 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.570 00:21:59.571 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.571 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.571 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.831 { 00:21:59.831 "cntlid": 119, 00:21:59.831 "qid": 0, 00:21:59.831 "state": "enabled", 00:21:59.831 "thread": "nvmf_tgt_poll_group_000", 00:21:59.831 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:59.831 "listen_address": { 00:21:59.831 "trtype": "TCP", 00:21:59.831 "adrfam": "IPv4", 00:21:59.831 "traddr": "10.0.0.2", 00:21:59.831 "trsvcid": "4420" 00:21:59.831 }, 00:21:59.831 "peer_address": { 00:21:59.831 "trtype": "TCP", 00:21:59.831 "adrfam": "IPv4", 00:21:59.831 "traddr": "10.0.0.1", 00:21:59.831 "trsvcid": "40500" 00:21:59.831 }, 00:21:59.831 "auth": { 00:21:59.831 "state": "completed", 00:21:59.831 "digest": "sha512", 00:21:59.831 "dhgroup": "ffdhe3072" 00:21:59.831 } 00:21:59.831 } 00:21:59.831 ]' 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.831 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.091 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:00.091 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.661 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.661 07:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.921 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.181 00:22:01.181 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.181 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.181 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.442 07:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.442 { 00:22:01.442 "cntlid": 121, 00:22:01.442 "qid": 0, 00:22:01.442 "state": "enabled", 00:22:01.442 "thread": "nvmf_tgt_poll_group_000", 00:22:01.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.442 "listen_address": { 00:22:01.442 "trtype": "TCP", 00:22:01.442 "adrfam": "IPv4", 00:22:01.442 "traddr": "10.0.0.2", 00:22:01.442 "trsvcid": "4420" 00:22:01.442 }, 00:22:01.442 "peer_address": { 00:22:01.442 "trtype": "TCP", 00:22:01.442 "adrfam": "IPv4", 00:22:01.442 "traddr": "10.0.0.1", 00:22:01.442 "trsvcid": "40530" 00:22:01.442 }, 00:22:01.442 "auth": { 00:22:01.442 "state": "completed", 00:22:01.442 "digest": "sha512", 00:22:01.442 "dhgroup": "ffdhe4096" 00:22:01.442 } 00:22:01.442 } 00:22:01.442 ]' 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.442 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.702 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:01.702 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
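
Each pass of this loop exercises one digest/dhgroup/key combination through the same RPC sequence; the pass that follows uses sha512/ffdhe4096 with key1. A minimal consolidated sketch of one pass is given below, built only from commands that appear verbatim in this log: $hostnqn, $hostid, $key and $ckey stand in for the UUID-based host NQN and the DHHC-1 secrets logged here, rpc.py is assumed to be run from the SPDK checkout, and the target RPC daemon is assumed to use the default socket while the host-side daemon listens on /var/tmp/host.sock.

# Host side: restrict the NVMe driver to the digest/DH group under test.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: register the host NQN with its DH-HMAC-CHAP key(s).
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, authenticating with the same keys.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the qpair negotiated the expected auth parameters, then detach.
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake through the kernel initiator, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
    --hostid "$hostid" -l 0 --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
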
00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.273 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.533 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:02.533 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.533 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.533 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.533 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.533 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.534 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.534 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.534 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.534 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.534 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.534 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.534 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.794 00:22:02.794 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.794 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.794 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.054 { 00:22:03.054 "cntlid": 123, 00:22:03.054 "qid": 0, 00:22:03.054 "state": "enabled", 00:22:03.054 "thread": "nvmf_tgt_poll_group_000", 00:22:03.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:03.054 "listen_address": { 00:22:03.054 "trtype": "TCP", 00:22:03.054 "adrfam": "IPv4", 00:22:03.054 "traddr": "10.0.0.2", 00:22:03.054 "trsvcid": "4420" 00:22:03.054 }, 00:22:03.054 "peer_address": { 00:22:03.054 "trtype": "TCP", 00:22:03.054 "adrfam": "IPv4", 00:22:03.054 "traddr": "10.0.0.1", 00:22:03.054 "trsvcid": "40552" 00:22:03.054 }, 00:22:03.054 "auth": { 00:22:03.054 "state": "completed", 00:22:03.054 "digest": "sha512", 00:22:03.054 "dhgroup": "ffdhe4096" 00:22:03.054 } 00:22:03.054 } 00:22:03.054 ]' 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.054 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.347 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:22:03.347 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:22:03.918 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.918 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.918 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.918 07:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.918 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.918 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.918 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.918 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.179 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.439 00:22:04.439 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.439 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.439 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.700 07:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.700 { 00:22:04.700 "cntlid": 125, 00:22:04.700 "qid": 0, 00:22:04.700 "state": "enabled", 00:22:04.700 "thread": "nvmf_tgt_poll_group_000", 00:22:04.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:04.700 "listen_address": { 00:22:04.700 "trtype": "TCP", 00:22:04.700 "adrfam": "IPv4", 00:22:04.700 "traddr": "10.0.0.2", 00:22:04.700 "trsvcid": "4420" 00:22:04.700 }, 00:22:04.700 "peer_address": { 00:22:04.700 "trtype": "TCP", 00:22:04.700 "adrfam": "IPv4", 00:22:04.700 "traddr": "10.0.0.1", 00:22:04.700 "trsvcid": "58554" 00:22:04.700 }, 00:22:04.700 "auth": { 00:22:04.700 "state": "completed", 00:22:04.700 "digest": "sha512", 00:22:04.700 "dhgroup": "ffdhe4096" 00:22:04.700 } 00:22:04.700 } 00:22:04.700 ]' 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.700 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.959 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:22:04.960 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.530 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.791 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.051 00:22:06.051 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.051 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.051 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.312 07:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.312 { 00:22:06.312 "cntlid": 127, 00:22:06.312 "qid": 0, 00:22:06.312 "state": "enabled", 00:22:06.312 "thread": "nvmf_tgt_poll_group_000", 00:22:06.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:06.312 "listen_address": { 00:22:06.312 "trtype": "TCP", 00:22:06.312 "adrfam": "IPv4", 00:22:06.312 "traddr": "10.0.0.2", 00:22:06.312 "trsvcid": "4420" 00:22:06.312 }, 00:22:06.312 "peer_address": { 00:22:06.312 "trtype": "TCP", 00:22:06.312 "adrfam": "IPv4", 00:22:06.312 "traddr": "10.0.0.1", 00:22:06.312 "trsvcid": "58582" 00:22:06.312 }, 00:22:06.312 "auth": { 00:22:06.312 "state": "completed", 00:22:06.312 "digest": "sha512", 00:22:06.312 "dhgroup": "ffdhe4096" 00:22:06.312 } 00:22:06.312 } 00:22:06.312 ]' 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.312 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.572 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:06.572 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.143 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.403 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:07.403 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.403 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.403 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.403 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.403 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.403 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.404 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.404 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.404 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.404 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.404 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.404 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.664 00:22:07.664 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.664 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.664 
07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.925 { 00:22:07.925 "cntlid": 129, 00:22:07.925 "qid": 0, 00:22:07.925 "state": "enabled", 00:22:07.925 "thread": "nvmf_tgt_poll_group_000", 00:22:07.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:07.925 "listen_address": { 00:22:07.925 "trtype": "TCP", 00:22:07.925 "adrfam": "IPv4", 00:22:07.925 "traddr": "10.0.0.2", 00:22:07.925 "trsvcid": "4420" 00:22:07.925 }, 00:22:07.925 "peer_address": { 00:22:07.925 "trtype": "TCP", 00:22:07.925 "adrfam": "IPv4", 00:22:07.925 "traddr": "10.0.0.1", 00:22:07.925 "trsvcid": "58622" 00:22:07.925 }, 00:22:07.925 "auth": { 00:22:07.925 "state": "completed", 00:22:07.925 "digest": "sha512", 00:22:07.925 "dhgroup": "ffdhe6144" 00:22:07.925 } 00:22:07.925 } 00:22:07.925 ]' 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.925 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.185 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.185 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.185 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.185 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.185 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:08.185 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret 
DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.127 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.387 00:22:09.387 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.387 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.387 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.647 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.647 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.647 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.647 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.647 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.647 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.647 { 00:22:09.647 "cntlid": 131, 00:22:09.647 "qid": 0, 00:22:09.647 "state": "enabled", 00:22:09.647 "thread": "nvmf_tgt_poll_group_000", 00:22:09.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.647 "listen_address": { 00:22:09.647 "trtype": "TCP", 00:22:09.647 "adrfam": "IPv4", 00:22:09.647 "traddr": "10.0.0.2", 00:22:09.647 "trsvcid": "4420" 00:22:09.647 }, 00:22:09.647 "peer_address": { 00:22:09.647 "trtype": "TCP", 00:22:09.647 "adrfam": "IPv4", 00:22:09.647 "traddr": "10.0.0.1", 00:22:09.647 "trsvcid": "58634" 00:22:09.647 }, 00:22:09.647 "auth": { 00:22:09.647 "state": "completed", 00:22:09.647 "digest": "sha512", 00:22:09.647 "dhgroup": "ffdhe6144" 00:22:09.647 } 00:22:09.647 } 00:22:09.647 ]' 00:22:09.647 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.648 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.648 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.908 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.908 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.908 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.908 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.908 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.908 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:22:09.908 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:22:10.849 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.849 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.849 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.850 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.110 00:22:11.110 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.110 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.110 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.370 { 00:22:11.370 "cntlid": 133, 00:22:11.370 "qid": 0, 00:22:11.370 "state": "enabled", 00:22:11.370 "thread": "nvmf_tgt_poll_group_000", 00:22:11.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:11.370 "listen_address": { 00:22:11.370 "trtype": "TCP", 00:22:11.370 "adrfam": "IPv4", 00:22:11.370 "traddr": "10.0.0.2", 00:22:11.370 "trsvcid": "4420" 00:22:11.370 }, 00:22:11.370 "peer_address": { 00:22:11.370 "trtype": "TCP", 00:22:11.370 "adrfam": "IPv4", 00:22:11.370 "traddr": "10.0.0.1", 00:22:11.370 "trsvcid": "58666" 00:22:11.370 }, 00:22:11.370 "auth": { 00:22:11.370 "state": "completed", 00:22:11.370 "digest": "sha512", 00:22:11.370 "dhgroup": "ffdhe6144" 00:22:11.370 } 00:22:11.370 } 00:22:11.370 ]' 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.370 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.631 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.631 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.631 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.631 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.631 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.892 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret 
DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:22:11.892 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.461 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:12.722 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.982 00:22:12.982 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.982 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.982 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.242 { 00:22:13.242 "cntlid": 135, 00:22:13.242 "qid": 0, 00:22:13.242 "state": "enabled", 00:22:13.242 "thread": "nvmf_tgt_poll_group_000", 00:22:13.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.242 "listen_address": { 00:22:13.242 "trtype": "TCP", 00:22:13.242 "adrfam": "IPv4", 00:22:13.242 "traddr": "10.0.0.2", 00:22:13.242 "trsvcid": "4420" 00:22:13.242 }, 00:22:13.242 "peer_address": { 00:22:13.242 "trtype": "TCP", 00:22:13.242 "adrfam": "IPv4", 00:22:13.242 "traddr": "10.0.0.1", 00:22:13.242 "trsvcid": "58688" 00:22:13.242 }, 00:22:13.242 "auth": { 00:22:13.242 "state": "completed", 00:22:13.242 "digest": "sha512", 00:22:13.242 "dhgroup": "ffdhe6144" 00:22:13.242 } 00:22:13.242 } 00:22:13.242 ]' 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.242 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.502 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:13.502 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.073 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.333 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.334 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.334 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.905 00:22:14.905 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.905 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.905 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.905 { 00:22:14.905 "cntlid": 137, 00:22:14.905 "qid": 0, 00:22:14.905 "state": "enabled", 00:22:14.905 "thread": "nvmf_tgt_poll_group_000", 00:22:14.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:14.905 "listen_address": { 00:22:14.905 "trtype": "TCP", 00:22:14.905 "adrfam": "IPv4", 00:22:14.905 "traddr": "10.0.0.2", 00:22:14.905 "trsvcid": "4420" 00:22:14.905 }, 00:22:14.905 "peer_address": { 00:22:14.905 "trtype": "TCP", 00:22:14.905 "adrfam": "IPv4", 00:22:14.905 "traddr": "10.0.0.1", 00:22:14.905 "trsvcid": "36160" 00:22:14.905 }, 00:22:14.905 "auth": { 00:22:14.905 "state": "completed", 00:22:14.905 "digest": "sha512", 00:22:14.905 "dhgroup": "ffdhe8192" 00:22:14.905 } 00:22:14.905 } 00:22:14.905 ]' 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.905 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.166 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.166 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.166 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.166 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.166 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.166 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.439 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:15.439 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.010 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.271 07:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.271 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.531 00:22:16.531 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.531 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.531 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.791 { 00:22:16.791 "cntlid": 139, 00:22:16.791 "qid": 0, 00:22:16.791 "state": "enabled", 00:22:16.791 "thread": "nvmf_tgt_poll_group_000", 00:22:16.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:16.791 "listen_address": { 00:22:16.791 "trtype": "TCP", 00:22:16.791 "adrfam": "IPv4", 00:22:16.791 "traddr": "10.0.0.2", 00:22:16.791 "trsvcid": "4420" 00:22:16.791 }, 00:22:16.791 "peer_address": { 00:22:16.791 "trtype": "TCP", 00:22:16.791 "adrfam": "IPv4", 00:22:16.791 "traddr": "10.0.0.1", 00:22:16.791 "trsvcid": "36176" 00:22:16.791 }, 00:22:16.791 "auth": { 00:22:16.791 "state": "completed", 00:22:16.791 "digest": "sha512", 00:22:16.791 "dhgroup": "ffdhe8192" 00:22:16.791 } 00:22:16.791 } 00:22:16.791 ]' 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.791 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.052 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.052 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.052 07:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.052 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.052 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.052 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:22:17.052 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: --dhchap-ctrl-secret DHHC-1:02:YmM5ZjllNzdlYzA2OTU3M2VjMThlYzc2MTE1ZTI4YWFmZTVlZTg5NmM1Yjg2NmNk0/oG9A==: 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.994 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.994 07:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.994 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.572 00:22:18.572 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.572 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.572 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.572 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.572 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.572 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.572 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.833 { 00:22:18.833 "cntlid": 141, 00:22:18.833 "qid": 0, 00:22:18.833 "state": "enabled", 00:22:18.833 "thread": "nvmf_tgt_poll_group_000", 00:22:18.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:18.833 "listen_address": { 00:22:18.833 "trtype": "TCP", 00:22:18.833 "adrfam": "IPv4", 00:22:18.833 "traddr": "10.0.0.2", 00:22:18.833 "trsvcid": "4420" 00:22:18.833 }, 00:22:18.833 "peer_address": { 00:22:18.833 "trtype": "TCP", 00:22:18.833 "adrfam": "IPv4", 00:22:18.833 "traddr": "10.0.0.1", 00:22:18.833 "trsvcid": "36186" 00:22:18.833 }, 00:22:18.833 "auth": { 00:22:18.833 "state": "completed", 00:22:18.833 "digest": "sha512", 00:22:18.833 "dhgroup": "ffdhe8192" 00:22:18.833 } 00:22:18.833 } 00:22:18.833 ]' 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.833 07:17:29 
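Each successful attach is then verified through nvmf_subsystem_get_qpairs: the script captures the qpair array and asserts that auth.state reached "completed" and that auth.digest and auth.dhgroup match what was configured, which is what the jq pipelines and [[ ... ]] pattern matches in the trace are doing. Roughly (a sketch; field names are as printed in the JSON above, and rpc_cmd is the script's target-RPC helper):

    # Assert negotiated auth parameters on the active qpair (sketch)
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]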
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.833 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.095 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:22:19.095 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:01:Zjk2MjUzNDgwODBlMTkxMzhlYTVhZjMxNDRhNjE1YzdLP6Mb: 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.667 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.928 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:19.928 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.928 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.928 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.928 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.929 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.929 07:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:19.929 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.929 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.929 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.929 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.929 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.929 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.189 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.451 { 00:22:20.451 "cntlid": 143, 00:22:20.451 "qid": 0, 00:22:20.451 "state": "enabled", 00:22:20.451 "thread": "nvmf_tgt_poll_group_000", 00:22:20.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:20.451 "listen_address": { 00:22:20.451 "trtype": "TCP", 00:22:20.451 "adrfam": "IPv4", 00:22:20.451 "traddr": "10.0.0.2", 00:22:20.451 "trsvcid": "4420" 00:22:20.451 }, 00:22:20.451 "peer_address": { 00:22:20.451 "trtype": "TCP", 00:22:20.451 "adrfam": "IPv4", 00:22:20.451 "traddr": "10.0.0.1", 00:22:20.451 "trsvcid": "36196" 00:22:20.451 }, 00:22:20.451 "auth": { 00:22:20.451 "state": "completed", 00:22:20.451 "digest": "sha512", 00:22:20.451 "dhgroup": "ffdhe8192" 00:22:20.451 } 00:22:20.451 } 00:22:20.451 ]' 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.451 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.451 
07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.711 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.711 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.711 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.711 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.711 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.972 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:20.972 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.545 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.807 07:17:32 
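For the final positive pass (auth.sh@129-141) the script switches from one digest/dhgroup per iteration to everything at once; the IFS=, / printf %s pairs above are the bash idiom it uses to join its arrays into the comma-separated values that bdev_nvme_set_options expects. The same idiom in isolation (a sketch; the join helper function is illustrative, the array contents are as printed in the trace):

    # Join arrays into comma-separated option values (sketch)
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    join() { local IFS=,; printf %s "$*"; }   # "$*" joins args with the first char of IFS
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests  "$(join "${digests[@]}")" \
        --dhchap-dhgroups "$(join "${dhgroups[@]}")"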
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.807 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.067 00:22:22.067 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.067 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.067 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.327 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.328 { 00:22:22.328 "cntlid": 145, 00:22:22.328 "qid": 0, 00:22:22.328 "state": "enabled", 00:22:22.328 "thread": "nvmf_tgt_poll_group_000", 00:22:22.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:22.328 "listen_address": { 00:22:22.328 "trtype": "TCP", 00:22:22.328 "adrfam": "IPv4", 00:22:22.328 "traddr": "10.0.0.2", 00:22:22.328 "trsvcid": "4420" 00:22:22.328 }, 00:22:22.328 "peer_address": { 00:22:22.328 
"trtype": "TCP", 00:22:22.328 "adrfam": "IPv4", 00:22:22.328 "traddr": "10.0.0.1", 00:22:22.328 "trsvcid": "36226" 00:22:22.328 }, 00:22:22.328 "auth": { 00:22:22.328 "state": "completed", 00:22:22.328 "digest": "sha512", 00:22:22.328 "dhgroup": "ffdhe8192" 00:22:22.328 } 00:22:22.328 } 00:22:22.328 ]' 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.328 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.589 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.589 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.589 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.589 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:22.589 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YWNlMzYzMDI1ODU0MThkNDcyMTk2MjM2NTNjYTkzN2I1NzkxMzVhN2NmNDY3NDQ3xIh/KQ==: --dhchap-ctrl-secret DHHC-1:03:YjgwZjc1OTRlYmNmYjk4ZWIyMjkwZWIxNTk5NzNhOTgzZDQzYWE1OGRjYzZlODAyOGQyMDliNmU0MjVmNGJlOfMt/Mk=: 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:23.532 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:23.794 request: 00:22:23.794 { 00:22:23.794 "name": "nvme0", 00:22:23.794 "trtype": "tcp", 00:22:23.794 "traddr": "10.0.0.2", 00:22:23.794 "adrfam": "ipv4", 00:22:23.794 "trsvcid": "4420", 00:22:23.794 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:23.794 "prchk_reftag": false, 00:22:23.794 "prchk_guard": false, 00:22:23.794 "hdgst": false, 00:22:23.794 "ddgst": false, 00:22:23.794 "dhchap_key": "key2", 00:22:23.794 "allow_unrecognized_csi": false, 00:22:23.794 "method": "bdev_nvme_attach_controller", 00:22:23.794 "req_id": 1 00:22:23.794 } 00:22:23.794 Got JSON-RPC error response 00:22:23.794 response: 00:22:23.794 { 00:22:23.794 "code": -5, 00:22:23.794 "message": "Input/output error" 00:22:23.794 } 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.794 07:17:34 
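The request/response dump above is a deliberate failure: the host was registered with key1 only (auth.sh@144), so attaching with key2 must be rejected. Note that the authentication failure surfaces to the RPC caller as a generic JSON-RPC error, code -5 (Input/output error), rather than a dedicated auth error code; the NOT helper from common/autotest_common.sh inverts the exit status, so the test passes exactly when the attach fails. The pattern in outline (a sketch; NOT and bdev_connect are the script's helpers, and the NQN variables abbreviate the values traced above):

    # Expected-failure check: the wrong key must not authenticate (sketch)
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1
    NOT bdev_connect -b nvme0 --dhchap-key key2   # rejected: JSON-RPC error -5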
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.794 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.368 request: 00:22:24.368 { 00:22:24.368 "name": "nvme0", 00:22:24.368 "trtype": "tcp", 00:22:24.368 "traddr": "10.0.0.2", 00:22:24.368 "adrfam": "ipv4", 00:22:24.368 "trsvcid": "4420", 00:22:24.368 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:24.368 "prchk_reftag": false, 00:22:24.368 "prchk_guard": false, 00:22:24.368 "hdgst": false, 00:22:24.368 "ddgst": false, 00:22:24.368 "dhchap_key": "key1", 00:22:24.368 "dhchap_ctrlr_key": "ckey2", 00:22:24.368 "allow_unrecognized_csi": false, 00:22:24.368 "method": "bdev_nvme_attach_controller", 00:22:24.368 "req_id": 1 00:22:24.368 } 00:22:24.368 Got JSON-RPC error response 00:22:24.368 response: 00:22:24.368 { 00:22:24.368 "code": -5, 00:22:24.368 "message": "Input/output error" 00:22:24.368 } 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:24.368 07:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.368 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.629 request: 00:22:24.629 { 00:22:24.629 "name": "nvme0", 00:22:24.629 "trtype": "tcp", 00:22:24.629 "traddr": "10.0.0.2", 00:22:24.630 "adrfam": "ipv4", 00:22:24.630 "trsvcid": "4420", 00:22:24.630 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:24.630 "prchk_reftag": false, 00:22:24.630 "prchk_guard": false, 00:22:24.630 "hdgst": false, 00:22:24.630 "ddgst": false, 00:22:24.630 "dhchap_key": "key1", 00:22:24.630 "dhchap_ctrlr_key": "ckey1", 00:22:24.630 "allow_unrecognized_csi": false, 00:22:24.630 "method": "bdev_nvme_attach_controller", 00:22:24.630 "req_id": 1 00:22:24.630 } 00:22:24.630 Got JSON-RPC error response 00:22:24.630 response: 00:22:24.630 { 00:22:24.630 "code": -5, 00:22:24.630 "message": "Input/output error" 00:22:24.630 } 00:22:24.630 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:24.630 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.630 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.630 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.630 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.630 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.630 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2366831 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2366831 ']' 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2366831 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366831 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366831' 00:22:24.892 killing process with pid 2366831 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2366831 00:22:24.892 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2366831 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2392602 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2392602 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2392602 ']' 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.892 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2392602 00:22:25.841 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2392602 ']' 00:22:25.842 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.842 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.842 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
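Between the two halves of the test the first target process (pid 2366831) is torn down with killprocess, which, as traced above, confirms the pid is still alive with kill -0 and inspects ps --no-headers -o comm= to avoid signalling a bare sudo before killing and waiting. A fresh nvmf_tgt is then launched inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc, which holds the app before subsystem initialization so keys can be loaded into the keyring first, and -L nvmf_auth to enable the auth debug log flag. In outline (a sketch; waitforlisten is the autotest helper that polls until the RPC socket answers):

    # Tear down the old target, start a new one paused at the RPC phase (sketch)
    killprocess 2366831
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten $nvmfpid    # block until /var/tmp/spdk.sock accepts RPCs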
00:22:25.842 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.842 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 null0 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.l2p 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dM0 ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dM0 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9tL 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.O2P ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O2P 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:26.156 07:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xh2 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Qfy ]] 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qfy 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hj6 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
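The sequence above stages the test's secrets and opens the happy path of connect_authenticate sha512 ffdhe8192 3: each generated key file is loaded into the target keyring (key0 through key3, plus the ckey* controller keys), the host NQN is admitted with --dhchap-key key3, and the host-side app then attaches over TCP with the same key. A condensed sketch of that flow, built only from the RPCs visible in this log and assuming the host-side app has already loaded the same key file into its own keyring:

  # Target side: register the sha512 secret and admit the host with it.
  rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.hj6
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key3
  # Host side (the second app listening on /var/tmp/host.sock): attach with
  # the matching key so the DH-HMAC-CHAP handshake completes and nvme0n1 appears.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3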
00:22:26.157 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.143 nvme0n1 00:22:27.143 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.143 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.143 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.144 { 00:22:27.144 "cntlid": 1, 00:22:27.144 "qid": 0, 00:22:27.144 "state": "enabled", 00:22:27.144 "thread": "nvmf_tgt_poll_group_000", 00:22:27.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:27.144 "listen_address": { 00:22:27.144 "trtype": "TCP", 00:22:27.144 "adrfam": "IPv4", 00:22:27.144 "traddr": "10.0.0.2", 00:22:27.144 "trsvcid": "4420" 00:22:27.144 }, 00:22:27.144 "peer_address": { 00:22:27.144 "trtype": "TCP", 00:22:27.144 "adrfam": "IPv4", 00:22:27.144 "traddr": "10.0.0.1", 00:22:27.144 "trsvcid": "38790" 00:22:27.144 }, 00:22:27.144 "auth": { 00:22:27.144 "state": "completed", 00:22:27.144 "digest": "sha512", 00:22:27.144 "dhgroup": "ffdhe8192" 00:22:27.144 } 00:22:27.144 } 00:22:27.144 ]' 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.144 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.403 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.403 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.403 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.403 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:27.404 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.342 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.604 request: 00:22:28.604 { 00:22:28.604 "name": "nvme0", 00:22:28.604 "trtype": "tcp", 00:22:28.604 "traddr": "10.0.0.2", 00:22:28.604 "adrfam": "ipv4", 00:22:28.604 "trsvcid": "4420", 00:22:28.604 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:28.604 "prchk_reftag": false, 00:22:28.604 "prchk_guard": false, 00:22:28.604 "hdgst": false, 00:22:28.604 "ddgst": false, 00:22:28.604 "dhchap_key": "key3", 00:22:28.604 "allow_unrecognized_csi": false, 00:22:28.604 "method": "bdev_nvme_attach_controller", 00:22:28.604 "req_id": 1 00:22:28.604 } 00:22:28.604 Got JSON-RPC error response 00:22:28.604 response: 00:22:28.604 { 00:22:28.604 "code": -5, 00:22:28.604 "message": "Input/output error" 00:22:28.604 } 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.604 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.866 request: 00:22:28.866 { 00:22:28.866 "name": "nvme0", 00:22:28.866 "trtype": "tcp", 00:22:28.866 "traddr": "10.0.0.2", 00:22:28.866 "adrfam": "ipv4", 00:22:28.866 "trsvcid": "4420", 00:22:28.866 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:28.866 "prchk_reftag": false, 00:22:28.866 "prchk_guard": false, 00:22:28.866 "hdgst": false, 00:22:28.866 "ddgst": false, 00:22:28.866 "dhchap_key": "key3", 00:22:28.866 "allow_unrecognized_csi": false, 00:22:28.867 "method": "bdev_nvme_attach_controller", 00:22:28.867 "req_id": 1 00:22:28.867 } 00:22:28.867 Got JSON-RPC error response 00:22:28.867 response: 00:22:28.867 { 00:22:28.867 "code": -5, 00:22:28.867 "message": "Input/output error" 00:22:28.867 } 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.867 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.127 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.389 request: 00:22:29.389 { 00:22:29.389 "name": "nvme0", 00:22:29.389 "trtype": "tcp", 00:22:29.389 "traddr": "10.0.0.2", 00:22:29.389 "adrfam": "ipv4", 00:22:29.389 "trsvcid": "4420", 00:22:29.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:29.389 "prchk_reftag": false, 00:22:29.389 "prchk_guard": false, 00:22:29.389 "hdgst": false, 00:22:29.389 "ddgst": false, 00:22:29.389 "dhchap_key": "key0", 00:22:29.389 "dhchap_ctrlr_key": "key1", 00:22:29.389 "allow_unrecognized_csi": false, 00:22:29.389 "method": "bdev_nvme_attach_controller", 00:22:29.389 "req_id": 1 00:22:29.389 } 00:22:29.389 Got JSON-RPC error response 00:22:29.389 response: 00:22:29.389 { 00:22:29.389 "code": -5, 00:22:29.389 "message": "Input/output error" 00:22:29.389 } 00:22:29.389 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.389 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.389 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.389 07:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.389 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:29.389 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:29.389 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:29.650 nvme0n1 00:22:29.650 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:29.650 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:29.650 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.912 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.912 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.912 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.912 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:29.912 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.912 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.912 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.912 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:29.912 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:29.912 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:30.854 nvme0n1 00:22:30.854 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:30.854 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:30.855 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:30.855 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.115 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.115 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:31.115 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MjI4N2RlMDA1MWJlMmVlMGFjYjI0MDA3ZmFkNmVlNDZhZDhlOGFjMjk1Njk5MDUxOTA1YjFkYmJiY2ZlZp2aNds=: 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.687 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:31.948 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:32.522 request: 00:22:32.522 { 00:22:32.522 "name": "nvme0", 00:22:32.522 "trtype": "tcp", 00:22:32.522 "traddr": "10.0.0.2", 00:22:32.522 "adrfam": "ipv4", 00:22:32.522 "trsvcid": "4420", 00:22:32.522 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:32.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:32.522 "prchk_reftag": false, 00:22:32.522 "prchk_guard": false, 00:22:32.522 "hdgst": false, 00:22:32.522 "ddgst": false, 00:22:32.522 "dhchap_key": "key1", 00:22:32.522 "allow_unrecognized_csi": false, 00:22:32.522 "method": "bdev_nvme_attach_controller", 00:22:32.522 "req_id": 1 00:22:32.522 } 00:22:32.522 Got JSON-RPC error response 00:22:32.522 response: 00:22:32.522 { 00:22:32.522 "code": -5, 00:22:32.522 "message": "Input/output error" 00:22:32.522 } 00:22:32.522 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:32.522 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.522 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.522 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.522 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.522 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.522 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:33.093 nvme0n1 00:22:33.093 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:33.093 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:33.093 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.353 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.353 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.353 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.614 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.614 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.614 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.614 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.614 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:33.614 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:33.614 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:33.875 nvme0n1 00:22:33.875 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:33.875 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:33.875 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.875 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.875 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.875 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: '' 2s 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: ]] 00:22:34.136 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjU0NWI5NjM2YjgwY2Y3ZmYwYThiZDljNTY2MDYxN2XXv5uu: 00:22:34.137 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:34.137 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:34.137 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:36.051 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:36.051 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:36.051 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:36.051 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: 2s 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: ]] 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWEwZTVjOTk3NzJhYTcwOTZjMWRiZjBhYTZjZDllY2E3Y2UzZDZmNzJhNDE4Mzg1QB+epw==: 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:36.312 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.226 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.167 nvme0n1 00:22:39.167 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.167 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.167 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.167 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.167 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.167 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.427 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:39.427 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:39.427 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.687 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.687 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.687 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.687 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.687 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.687 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:39.687 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:39.947 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:39.947 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:39.947 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.208 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.470 request: 00:22:40.470 { 00:22:40.470 "name": "nvme0", 00:22:40.470 "dhchap_key": "key1", 00:22:40.470 "dhchap_ctrlr_key": "key3", 00:22:40.470 "method": "bdev_nvme_set_keys", 00:22:40.470 "req_id": 1 00:22:40.470 } 00:22:40.470 Got JSON-RPC error response 00:22:40.470 response: 00:22:40.470 { 00:22:40.470 "code": -13, 00:22:40.470 "message": "Permission denied" 00:22:40.470 } 00:22:40.470 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:40.470 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.470 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.470 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.470 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:40.470 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:40.470 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.730 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:40.730 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:41.670 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:41.670 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:41.670 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.930 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:41.930 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.930 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.930 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.930 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.930 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:41.931 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:41.931 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:42.867 nvme0n1 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
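At this point the test is exercising live key rotation: nvmf_subsystem_set_keys re-keys the target while the connection is up, the jq length probe with a 1-second sleep waits for the host controller to drop once it can no longer re-authenticate, and the NOT-wrapped bdev_nvme_set_keys calls assert that a mismatched rotation is refused (the -13 "Permission denied" responses). A small sketch of that wait-for-detach probe, assuming the same host RPC socket as above:

  # Poll the host app until its controller list is empty, i.e. the session
  # using the old DH-HMAC-CHAP keys has been torn down after the re-key.
  while [ "$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
      sleep 1
  done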
00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:42.868 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:43.170 request: 00:22:43.170 { 00:22:43.170 "name": "nvme0", 00:22:43.170 "dhchap_key": "key2", 00:22:43.170 "dhchap_ctrlr_key": "key0", 00:22:43.170 "method": "bdev_nvme_set_keys", 00:22:43.170 "req_id": 1 00:22:43.170 } 00:22:43.170 Got JSON-RPC error response 00:22:43.170 response: 00:22:43.170 { 00:22:43.170 "code": -13, 00:22:43.170 "message": "Permission denied" 00:22:43.170 } 00:22:43.170 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:43.170 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.170 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.170 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.170 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:43.170 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:43.171 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.430 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:43.430 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:44.368 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:44.368 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:44.368 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2366933 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2366933 ']' 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2366933 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:44.627 
07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366933 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366933' 00:22:44.627 killing process with pid 2366933 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2366933 00:22:44.627 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2366933 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.886 rmmod nvme_tcp 00:22:44.886 rmmod nvme_fabrics 00:22:44.886 rmmod nvme_keyring 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2392602 ']' 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2392602 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2392602 ']' 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2392602 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.886 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2392602 00:22:44.886 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.886 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.886 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2392602' 00:22:44.886 killing process with pid 2392602 00:22:44.886 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2392602 00:22:44.886 07:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2392602 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.147 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.058 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.058 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.l2p /tmp/spdk.key-sha256.9tL /tmp/spdk.key-sha384.xh2 /tmp/spdk.key-sha512.hj6 /tmp/spdk.key-sha512.dM0 /tmp/spdk.key-sha384.O2P /tmp/spdk.key-sha256.Qfy '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:47.058 00:22:47.058 real 2m37.597s 00:22:47.058 user 5m54.398s 00:22:47.058 sys 0m25.021s 00:22:47.058 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.058 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.058 ************************************ 00:22:47.058 END TEST nvmf_auth_target 00:22:47.058 ************************************ 00:22:47.318 07:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:47.319 ************************************ 00:22:47.319 START TEST nvmf_bdevio_no_huge 00:22:47.319 ************************************ 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:47.319 * Looking for test storage... 
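The long stretch of scripts/common.sh tracing that follows is just the harness checking whether the installed lcov predates 2.x so it can pick coverage flags. Stripped of xtrace noise, the comparison helper behaves roughly like this simplified sketch (the real cmp_versions also validates each component through its decimal helper and supports other operators, which are omitted here):

    # Simplified sketch of lt()/cmp_versions: split both version strings on
    # ".-:" and compare numerically, component by component.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater -> not "<"
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal -> not "<"
    }
    lt 1.15 2 && echo "lcov older than 2.x"  # true here: 1 < 2 on the first component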
00:22:47.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:47.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.319 --rc genhtml_branch_coverage=1 00:22:47.319 --rc genhtml_function_coverage=1 00:22:47.319 --rc genhtml_legend=1 00:22:47.319 --rc geninfo_all_blocks=1 00:22:47.319 --rc geninfo_unexecuted_blocks=1 00:22:47.319 00:22:47.319 ' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:47.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.319 --rc genhtml_branch_coverage=1 00:22:47.319 --rc genhtml_function_coverage=1 00:22:47.319 --rc genhtml_legend=1 00:22:47.319 --rc geninfo_all_blocks=1 00:22:47.319 --rc geninfo_unexecuted_blocks=1 00:22:47.319 00:22:47.319 ' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:47.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.319 --rc genhtml_branch_coverage=1 00:22:47.319 --rc genhtml_function_coverage=1 00:22:47.319 --rc genhtml_legend=1 00:22:47.319 --rc geninfo_all_blocks=1 00:22:47.319 --rc geninfo_unexecuted_blocks=1 00:22:47.319 00:22:47.319 ' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:47.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.319 --rc genhtml_branch_coverage=1 00:22:47.319 --rc genhtml_function_coverage=1 00:22:47.319 --rc genhtml_legend=1 00:22:47.319 --rc geninfo_all_blocks=1 00:22:47.319 --rc geninfo_unexecuted_blocks=1 00:22:47.319 00:22:47.319 ' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.319 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:47.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.581 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.731 
07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.731 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:55.732 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:55.732 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:55.732 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:55.732 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:55.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:22:55.732 00:22:55.732 --- 10.0.0.2 ping statistics --- 00:22:55.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.732 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:22:55.732 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:22:55.732 00:22:55.732 --- 10.0.0.1 ping statistics --- 00:22:55.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.732 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2401017 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2401017 00:22:55.732 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:55.733 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2401017 ']' 00:22:55.733 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.733 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.733 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.733 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.733 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.733 [2024-11-27 07:18:06.124931] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:22:55.733 [2024-11-27 07:18:06.125007] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:55.733 [2024-11-27 07:18:06.234546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.733 [2024-11-27 07:18:06.295035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.733 [2024-11-27 07:18:06.295084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.733 [2024-11-27 07:18:06.295093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.733 [2024-11-27 07:18:06.295100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.733 [2024-11-27 07:18:06.295106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
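For orientation, the NIC/namespace plumbing that nvmf_tcp_init traced a little further up can be summarized as below; the device names (cvl_0_0/cvl_0_1), addresses, and target flags are the ones this rig used, collected from the trace rather than invented:

    # One port of the E810 pair becomes the target and moves into a private
    # network namespace; the other stays in the root namespace as initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port (tagged so teardown can strip only these rules)
    # and sanity-ping both directions before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target app runs inside the namespace with hugepages disabled:
    # --no-huge -s 1024 gives DPDK 1024 MB of regular memory, and -m 0x78
    # pins reactors to cores 3-6 (matching the four reactor lines above).
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78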
00:22:55.733 [2024-11-27 07:18:06.296646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:55.733 [2024-11-27 07:18:06.296811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:55.733 [2024-11-27 07:18:06.296970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.733 [2024-11-27 07:18:06.296970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.995 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.995 [2024-11-27 07:18:07.000656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.995 Malloc0 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.995 [2024-11-27 07:18:07.054418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.995 { 00:22:55.995 "params": { 00:22:55.995 "name": "Nvme$subsystem", 00:22:55.995 "trtype": "$TEST_TRANSPORT", 00:22:55.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.995 "adrfam": "ipv4", 00:22:55.995 "trsvcid": "$NVMF_PORT", 00:22:55.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.995 "hdgst": ${hdgst:-false}, 00:22:55.995 "ddgst": ${ddgst:-false} 00:22:55.995 }, 00:22:55.995 "method": "bdev_nvme_attach_controller" 00:22:55.995 } 00:22:55.995 EOF 00:22:55.995 )") 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:55.995 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:55.995 "params": { 00:22:55.995 "name": "Nvme1", 00:22:55.995 "trtype": "tcp", 00:22:55.995 "traddr": "10.0.0.2", 00:22:55.995 "adrfam": "ipv4", 00:22:55.995 "trsvcid": "4420", 00:22:55.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.995 "hdgst": false, 00:22:55.995 "ddgst": false 00:22:55.995 }, 00:22:55.995 "method": "bdev_nvme_attach_controller" 00:22:55.995 }' 00:22:55.995 [2024-11-27 07:18:07.112212] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
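The attach stanza printf'd just above is the only config gen_nvmf_target_json really emits for this run. A rough reconstruction of the full file bdevio receives on /dev/fd/62 follows; only the inner stanza is verbatim from the trace, and the surrounding "bdev" subsystem wrapper shape is an assumption about what the helper adds:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

It is consumed by the invocation seen in the trace: test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024, where /dev/fd/62 is process substitution around the generated JSON.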
00:22:55.995 [2024-11-27 07:18:07.112285] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2401219 ] 00:22:56.256 [2024-11-27 07:18:07.210557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:56.256 [2024-11-27 07:18:07.270647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.256 [2024-11-27 07:18:07.270806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.256 [2024-11-27 07:18:07.270806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.518 I/O targets: 00:22:56.518 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:56.518 00:22:56.518 00:22:56.518 CUnit - A unit testing framework for C - Version 2.1-3 00:22:56.518 http://cunit.sourceforge.net/ 00:22:56.518 00:22:56.518 00:22:56.518 Suite: bdevio tests on: Nvme1n1 00:22:56.518 Test: blockdev write read block ...passed 00:22:56.518 Test: blockdev write zeroes read block ...passed 00:22:56.518 Test: blockdev write zeroes read no split ...passed 00:22:56.518 Test: blockdev write zeroes read split ...passed 00:22:56.518 Test: blockdev write zeroes read split partial ...passed 00:22:56.518 Test: blockdev reset ...[2024-11-27 07:18:07.630121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:56.518 [2024-11-27 07:18:07.630235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a19810 (9): Bad file descriptor 00:22:56.779 [2024-11-27 07:18:07.734013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:56.779 passed 00:22:56.779 Test: blockdev write read 8 blocks ...passed 00:22:56.779 Test: blockdev write read size > 128k ...passed 00:22:56.779 Test: blockdev write read invalid size ...passed 00:22:56.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:56.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:56.779 Test: blockdev write read max offset ...passed 00:22:56.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:56.779 Test: blockdev writev readv 8 blocks ...passed 00:22:56.779 Test: blockdev writev readv 30 x 1block ...passed 00:22:56.779 Test: blockdev writev readv block ...passed 00:22:57.041 Test: blockdev writev readv size > 128k ...passed 00:22:57.041 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:57.041 Test: blockdev comparev and writev ...[2024-11-27 07:18:08.001631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.001681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.041 [2024-11-27 07:18:08.001699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.001708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.041 [2024-11-27 07:18:08.002258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.002271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:57.041 [2024-11-27 07:18:08.002285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.002292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:57.041 [2024-11-27 07:18:08.002846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.002860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:57.041 [2024-11-27 07:18:08.002873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.002881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:57.041 [2024-11-27 07:18:08.003383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.003395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:57.041 [2024-11-27 07:18:08.003409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.041 [2024-11-27 07:18:08.003417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:57.041 passed 00:22:57.041 Test: blockdev nvme passthru rw ...passed 00:22:57.042 Test: blockdev nvme passthru vendor specific ...[2024-11-27 07:18:08.087042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.042 [2024-11-27 07:18:08.087058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:57.042 [2024-11-27 07:18:08.087463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.042 [2024-11-27 07:18:08.087475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:57.042 [2024-11-27 07:18:08.087859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.042 [2024-11-27 07:18:08.087870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:57.042 [2024-11-27 07:18:08.088251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.042 [2024-11-27 07:18:08.088265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:57.042 passed 00:22:57.042 Test: blockdev nvme admin passthru ...passed 00:22:57.042 Test: blockdev copy ...passed 00:22:57.042 00:22:57.042 Run Summary: Type Total Ran Passed Failed Inactive 00:22:57.042 suites 1 1 n/a 0 0 00:22:57.042 tests 23 23 23 0 0 00:22:57.042 asserts 152 152 152 0 n/a 00:22:57.042 00:22:57.042 Elapsed time = 1.308 seconds 00:22:57.302 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.302 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.302 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:57.302 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.303 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.303 rmmod nvme_tcp 00:22:57.563 rmmod nvme_fabrics 00:22:57.563 rmmod nvme_keyring 00:22:57.563 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.563 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:57.563 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2401017 ']' 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2401017 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2401017 ']' 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2401017 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401017 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401017' 00:22:57.564 killing process with pid 2401017 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2401017 00:22:57.564 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2401017 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.824 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.825 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.825 00:22:59.825 real 0m12.698s 00:22:59.825 user 0m14.915s 00:22:59.825 sys 0m6.764s 00:22:59.825 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.825 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.825 ************************************ 00:22:59.825 END TEST nvmf_bdevio_no_huge 00:22:59.825 ************************************ 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:00.087 ************************************ 00:23:00.087 START TEST nvmf_tls 00:23:00.087 ************************************ 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:00.087 * Looking for test storage... 00:23:00.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.087 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.349 --rc genhtml_branch_coverage=1 00:23:00.349 --rc genhtml_function_coverage=1 00:23:00.349 --rc genhtml_legend=1 00:23:00.349 --rc geninfo_all_blocks=1 00:23:00.349 --rc geninfo_unexecuted_blocks=1 00:23:00.349 00:23:00.349 ' 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.349 --rc genhtml_branch_coverage=1 00:23:00.349 --rc genhtml_function_coverage=1 00:23:00.349 --rc genhtml_legend=1 00:23:00.349 --rc geninfo_all_blocks=1 00:23:00.349 --rc geninfo_unexecuted_blocks=1 00:23:00.349 00:23:00.349 ' 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.349 --rc genhtml_branch_coverage=1 00:23:00.349 --rc genhtml_function_coverage=1 00:23:00.349 --rc genhtml_legend=1 00:23:00.349 --rc geninfo_all_blocks=1 00:23:00.349 --rc geninfo_unexecuted_blocks=1 00:23:00.349 00:23:00.349 ' 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.349 --rc genhtml_branch_coverage=1 00:23:00.349 --rc genhtml_function_coverage=1 00:23:00.349 --rc genhtml_legend=1 00:23:00.349 --rc geninfo_all_blocks=1 00:23:00.349 --rc geninfo_unexecuted_blocks=1 00:23:00.349 00:23:00.349 ' 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
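The lcov probe traced above bottoms out in scripts/common.sh's lt/cmp_versions pair: both version strings are split on '.', '-' and ':' into arrays and compared component-wise as integers. A condensed standalone sketch of that comparison (simplified; unlike the upstream helper it assumes purely numeric components and skips the decimal normalization step):

    # version_lt A B: succeeds iff dotted version A sorts strictly before B.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
            (( a < b )) && return 0             # first differing component decides
            (( a > b )) && return 1
        done
        return 1                                # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

This matches the branch taken in the trace: lt 1.15 2 succeeds, so the 1.x-era LCOV_OPTS block is exported.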
00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.349 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.350 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
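The arrays assembled here (e810, x722, mlx) are vendor:device allow-lists; the loop that follows matches them against the PCI bus and resolves every hit to its kernel net interface through sysfs, which is where the cvl_0_0/cvl_0_1 names further down come from. A minimal sketch of that resolution step (illustrative only; assumes the PCI function is bound to a netdev-providing driver such as ice, as it is on this machine):

    # pci_to_netdev PCI_ADDR: print the net interface(s) backing a PCI function.
    pci_to_netdev() {
        local pci=$1 netdir
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue
            echo "${netdir##*/}"    # strip the sysfs path, keep the ifname
        done
    }

    pci_to_netdev 0000:4b:00.0    # -> cvl_0_0 in this run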
00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:08.490 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:08.490 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:08.490 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:08.490 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.490 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:23:08.491 00:23:08.491 --- 10.0.0.2 ping statistics --- 00:23:08.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.491 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:23:08.491 00:23:08.491 --- 10.0.0.1 ping statistics --- 00:23:08.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.491 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2406250 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2406250 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2406250 ']' 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.491 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.491 [2024-11-27 07:18:18.941491] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:23:08.491 [2024-11-27 07:18:18.941561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.491 [2024-11-27 07:18:19.045559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.491 [2024-11-27 07:18:19.096973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.491 [2024-11-27 07:18:19.097028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.491 [2024-11-27 07:18:19.097036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.491 [2024-11-27 07:18:19.097043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.491 [2024-11-27 07:18:19.097051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.491 [2024-11-27 07:18:19.097847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:08.752 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:09.013 true 00:23:09.013 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.013 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:09.013 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:09.013 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:09.013 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:09.274 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.274 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:09.534 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:09.534 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:09.534 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:09.795 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.795 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:09.795 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:09.795 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:09.795 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.795 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:10.055 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:10.055 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:10.055 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:10.316 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.316 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:10.316 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:10.316 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:10.316 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:10.576 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.576 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.pTCyFrwfCs 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.J5wOeTAn61 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pTCyFrwfCs 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.J5wOeTAn61 00:23:10.837 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:11.098 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:11.358 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.pTCyFrwfCs 00:23:11.358 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pTCyFrwfCs 00:23:11.358 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.358 [2024-11-27 07:18:22.496893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.358 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.619 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.880 [2024-11-27 07:18:22.829705] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.880 [2024-11-27 07:18:22.829918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.880 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.880 malloc0 00:23:11.880 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:12.141 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pTCyFrwfCs 00:23:12.141 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.402 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pTCyFrwfCs 00:23:24.634 Initializing NVMe Controllers 00:23:24.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:24.634 Initialization complete. Launching workers. 00:23:24.634 ======================================================== 00:23:24.634 Latency(us) 00:23:24.634 Device Information : IOPS MiB/s Average min max 00:23:24.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18735.96 73.19 3416.10 1164.52 4491.97 00:23:24.634 ======================================================== 00:23:24.634 Total : 18735.96 73.19 3416.10 1164.52 4491.97 00:23:24.634 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pTCyFrwfCs 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pTCyFrwfCs 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2409092 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2409092 /var/tmp/bdevperf.sock 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2409092 ']' 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:24.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.634 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.634 [2024-11-27 07:18:33.687552] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:23:24.634 [2024-11-27 07:18:33.687609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409092 ] 00:23:24.634 [2024-11-27 07:18:33.775048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.635 [2024-11-27 07:18:33.810243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.635 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.635 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.635 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pTCyFrwfCs 00:23:24.635 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.635 [2024-11-27 07:18:34.778959] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.635 TLSTESTn1 00:23:24.635 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:24.635 Running I/O for 10 seconds... 
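Before the I/O samples land, it is worth unpacking the key material this run rides on. format_interchange_psk (traced earlier at target/tls.sh@119-120) wraps a configured hex string into the NVMe TLS PSK interchange format; the result written to /tmp/tmp.pTCyFrwfCs is what keyring_file_add_key registered as key0 and what bdev_nvme_attach_controller consumed via --psk key0 above. A sketch of that derivation, reconstructed to be consistent with the keys printed in the trace (the upstream helper lives in nvmf/common.sh and may differ in detail):

    # Interchange format: "NVMeTLSkey-1:<digest as 2-digit hex>:<base64>:" where
    # the base64 payload is the key bytes with their CRC32 appended as 4
    # little-endian bytes.
    format_interchange_psk() {
        local key=$1 digest=$2
        python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); crc=struct.pack("<I", zlib.crc32(k)); print(f"NVMeTLSkey-1:{d:02x}:"+base64.b64encode(k+crc).decode()+":", end="")' "$key" "$digest"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # expected to reproduce the first key in the trace:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The key file is then locked down with chmod 0600 before use, exactly as the trace shows.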
00:23:25.842 5503.00 IOPS, 21.50 MiB/s [2024-11-27T06:18:37.989Z] 5267.50 IOPS, 20.58 MiB/s [2024-11-27T06:18:39.369Z] 5127.67 IOPS, 20.03 MiB/s [2024-11-27T06:18:40.310Z] 5016.50 IOPS, 19.60 MiB/s [2024-11-27T06:18:41.253Z] 5291.60 IOPS, 20.67 MiB/s [2024-11-27T06:18:42.196Z] 5395.50 IOPS, 21.08 MiB/s [2024-11-27T06:18:43.138Z] 5532.29 IOPS, 21.61 MiB/s [2024-11-27T06:18:44.079Z] 5525.00 IOPS, 21.58 MiB/s [2024-11-27T06:18:45.021Z] 5554.78 IOPS, 21.70 MiB/s [2024-11-27T06:18:45.021Z] 5602.10 IOPS, 21.88 MiB/s 00:23:33.816 Latency(us) 00:23:33.816 [2024-11-27T06:18:45.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.816 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.816 Verification LBA range: start 0x0 length 0x2000 00:23:33.816 TLSTESTn1 : 10.02 5604.41 21.89 0.00 0.00 22800.76 6034.77 38229.33 00:23:33.816 [2024-11-27T06:18:45.021Z] =================================================================================================================== 00:23:33.816 [2024-11-27T06:18:45.021Z] Total : 5604.41 21.89 0.00 0.00 22800.76 6034.77 38229.33 00:23:33.816 { 00:23:33.816 "results": [ 00:23:33.816 { 00:23:33.816 "job": "TLSTESTn1", 00:23:33.816 "core_mask": "0x4", 00:23:33.816 "workload": "verify", 00:23:33.816 "status": "finished", 00:23:33.816 "verify_range": { 00:23:33.816 "start": 0, 00:23:33.816 "length": 8192 00:23:33.816 }, 00:23:33.816 "queue_depth": 128, 00:23:33.816 "io_size": 4096, 00:23:33.816 "runtime": 10.018352, 00:23:33.816 "iops": 5604.4147779994155, 00:23:33.816 "mibps": 21.892245226560217, 00:23:33.816 "io_failed": 0, 00:23:33.816 "io_timeout": 0, 00:23:33.816 "avg_latency_us": 22800.76027095541, 00:23:33.816 "min_latency_us": 6034.7733333333335, 00:23:33.816 "max_latency_us": 38229.333333333336 00:23:33.816 } 00:23:33.816 ], 00:23:33.816 "core_count": 1 00:23:33.816 } 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2409092 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2409092 ']' 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2409092 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409092 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409092' 00:23:34.077 killing process with pid 2409092 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2409092 00:23:34.077 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.077 00:23:34.077 Latency(us) 00:23:34.077 [2024-11-27T06:18:45.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.077 [2024-11-27T06:18:45.282Z] 
=================================================================================================================== 00:23:34.077 [2024-11-27T06:18:45.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2409092 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J5wOeTAn61 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J5wOeTAn61 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J5wOeTAn61 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.J5wOeTAn61 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2411430 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2411430 /var/tmp/bdevperf.sock 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2411430 ']' 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.077 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.077 [2024-11-27 07:18:45.250064] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:23:34.077 [2024-11-27 07:18:45.250122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411430 ] 00:23:34.337 [2024-11-27 07:18:45.331296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.337 [2024-11-27 07:18:45.359884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.907 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.907 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.907 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.J5wOeTAn61 00:23:35.168 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.429 [2024-11-27 07:18:46.387871] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.429 [2024-11-27 07:18:46.392659] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:35.429 [2024-11-27 07:18:46.393001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x805be0 (107): Transport endpoint is not connected 00:23:35.429 [2024-11-27 07:18:46.393996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x805be0 (9): Bad file descriptor 00:23:35.429 [2024-11-27 07:18:46.394997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:35.429 [2024-11-27 07:18:46.395005] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:35.429 [2024-11-27 07:18:46.395010] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:35.429 [2024-11-27 07:18:46.395017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
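This failure chain is the point of the test: bdevperf registered /tmp/tmp.J5wOeTAn61 as key0, but the target only knows the other key for nqn.2016-06.io.spdk:host1, so the TLS handshake never completes. The socket read dies with errno 107 (Transport endpoint is not connected), the subsequent qpair flush sees a stale descriptor (9: Bad file descriptor), and controller init gives up and parks the controller in a failed state; the JSON-RPC request/response that carried the error back to the RPC client is dumped next. Reproducing the attach by hand would look like this (a sketch; socket path, key name and flags copied from this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.J5wOeTAn61
    if ! $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo 'attach failed as expected: mismatched PSK'
    fi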
00:23:35.429 request: 00:23:35.429 { 00:23:35.429 "name": "TLSTEST", 00:23:35.429 "trtype": "tcp", 00:23:35.429 "traddr": "10.0.0.2", 00:23:35.429 "adrfam": "ipv4", 00:23:35.429 "trsvcid": "4420", 00:23:35.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.429 "prchk_reftag": false, 00:23:35.429 "prchk_guard": false, 00:23:35.429 "hdgst": false, 00:23:35.429 "ddgst": false, 00:23:35.429 "psk": "key0", 00:23:35.429 "allow_unrecognized_csi": false, 00:23:35.429 "method": "bdev_nvme_attach_controller", 00:23:35.429 "req_id": 1 00:23:35.429 } 00:23:35.429 Got JSON-RPC error response 00:23:35.429 response: 00:23:35.429 { 00:23:35.429 "code": -5, 00:23:35.429 "message": "Input/output error" 00:23:35.429 } 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2411430 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2411430 ']' 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2411430 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411430 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411430' 00:23:35.429 killing process with pid 2411430 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2411430 00:23:35.429 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.429 00:23:35.429 Latency(us) 00:23:35.429 [2024-11-27T06:18:46.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.429 [2024-11-27T06:18:46.634Z] =================================================================================================================== 00:23:35.429 [2024-11-27T06:18:46.634Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2411430 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:35.429 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pTCyFrwfCs 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.pTCyFrwfCs 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pTCyFrwfCs 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pTCyFrwfCs 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2411622 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2411622 /var/tmp/bdevperf.sock 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2411622 ']' 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.430 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.430 [2024-11-27 07:18:46.625527] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
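The valid_exec_arg/es bookkeeping traced here belongs to the NOT wrapper that target/tls.sh puts around each run_bdevperf that is expected to fail. A stripped-down stand-in for the idea (hypothetical; the real autotest_common.sh helper additionally treats exit statuses above 128 as crashes rather than expected failures, and can match error output):

# Succeed only when the wrapped command fails
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
NOT false && echo "expected failure observed"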
00:23:35.430 [2024-11-27 07:18:46.625585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411622 ] 00:23:35.691 [2024-11-27 07:18:46.706822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.691 [2024-11-27 07:18:46.735929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.262 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.262 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.262 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pTCyFrwfCs 00:23:36.523 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:36.784 [2024-11-27 07:18:47.735830] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.784 [2024-11-27 07:18:47.745851] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:36.784 [2024-11-27 07:18:47.745871] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:36.784 [2024-11-27 07:18:47.745890] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:36.784 [2024-11-27 07:18:47.745980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1321be0 (107): Transport endpoint is not connected 00:23:36.784 [2024-11-27 07:18:47.746968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1321be0 (9): Bad file descriptor 00:23:36.784 [2024-11-27 07:18:47.747970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:36.784 [2024-11-27 07:18:47.747978] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:36.784 [2024-11-27 07:18:47.747983] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:36.784 [2024-11-27 07:18:47.747990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
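This second scenario isolates the PSK identity check rather than the key bytes: tcp_sock_get_key resolves the key by an identity string built from the host and subsystem NQNs, so a key provisioned for the host1/cnode1 pairing cannot serve a session presented as host2, and the handshake is refused before any I/O. The identity the target searched for is printed verbatim in the errors above; assuming the "NVMe0R01 <hostnqn> <subnqn>" layout shown there (the prefix appears to encode PSK version, type, and hash), it can be reconstructed with:

# Rebuild the identity string from the two NQNs in the failed lookup above
printf 'NVMe0R01 %s %s\n' \
    'nqn.2016-06.io.spdk:host2' 'nqn.2016-06.io.spdk:cnode1'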
00:23:36.784 request: 00:23:36.784 { 00:23:36.784 "name": "TLSTEST", 00:23:36.784 "trtype": "tcp", 00:23:36.784 "traddr": "10.0.0.2", 00:23:36.784 "adrfam": "ipv4", 00:23:36.784 "trsvcid": "4420", 00:23:36.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.784 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.784 "prchk_reftag": false, 00:23:36.784 "prchk_guard": false, 00:23:36.784 "hdgst": false, 00:23:36.784 "ddgst": false, 00:23:36.784 "psk": "key0", 00:23:36.784 "allow_unrecognized_csi": false, 00:23:36.784 "method": "bdev_nvme_attach_controller", 00:23:36.784 "req_id": 1 00:23:36.784 } 00:23:36.784 Got JSON-RPC error response 00:23:36.784 response: 00:23:36.784 { 00:23:36.784 "code": -5, 00:23:36.784 "message": "Input/output error" 00:23:36.784 } 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2411622 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2411622 ']' 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2411622 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411622 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411622' 00:23:36.784 killing process with pid 2411622 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2411622 00:23:36.784 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.784 00:23:36.784 Latency(us) 00:23:36.784 [2024-11-27T06:18:47.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.784 [2024-11-27T06:18:47.989Z] =================================================================================================================== 00:23:36.784 [2024-11-27T06:18:47.989Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2411622 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pTCyFrwfCs 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.pTCyFrwfCs 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pTCyFrwfCs 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pTCyFrwfCs 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2411807 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2411807 /var/tmp/bdevperf.sock 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2411807 ']' 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.784 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.046 [2024-11-27 07:18:47.995294] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:23:37.046 [2024-11-27 07:18:47.995347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411807 ] 00:23:37.046 [2024-11-27 07:18:48.079096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.046 [2024-11-27 07:18:48.106899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.619 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.619 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.619 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pTCyFrwfCs 00:23:37.880 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.141 [2024-11-27 07:18:49.118696] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.141 [2024-11-27 07:18:49.129210] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:38.141 [2024-11-27 07:18:49.129229] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:38.141 [2024-11-27 07:18:49.129249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:38.141 [2024-11-27 07:18:49.129920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218be0 (107): Transport endpoint is not connected 00:23:38.141 [2024-11-27 07:18:49.130916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218be0 (9): Bad file descriptor 00:23:38.141 [2024-11-27 07:18:49.131918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:38.141 [2024-11-27 07:18:49.131925] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:38.141 [2024-11-27 07:18:49.131932] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:38.141 [2024-11-27 07:18:49.131938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:38.141 request: 00:23:38.141 { 00:23:38.141 "name": "TLSTEST", 00:23:38.141 "trtype": "tcp", 00:23:38.141 "traddr": "10.0.0.2", 00:23:38.141 "adrfam": "ipv4", 00:23:38.141 "trsvcid": "4420", 00:23:38.141 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:38.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.141 "prchk_reftag": false, 00:23:38.141 "prchk_guard": false, 00:23:38.141 "hdgst": false, 00:23:38.141 "ddgst": false, 00:23:38.141 "psk": "key0", 00:23:38.141 "allow_unrecognized_csi": false, 00:23:38.141 "method": "bdev_nvme_attach_controller", 00:23:38.141 "req_id": 1 00:23:38.141 } 00:23:38.141 Got JSON-RPC error response 00:23:38.141 response: 00:23:38.141 { 00:23:38.141 "code": -5, 00:23:38.141 "message": "Input/output error" 00:23:38.141 } 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2411807 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2411807 ']' 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2411807 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411807 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411807' 00:23:38.141 killing process with pid 2411807 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2411807 00:23:38.141 Received shutdown signal, test time was about 10.000000 seconds 00:23:38.141 00:23:38.141 Latency(us) 00:23:38.141 [2024-11-27T06:18:49.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.141 [2024-11-27T06:18:49.346Z] =================================================================================================================== 00:23:38.141 [2024-11-27T06:18:49.346Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2411807 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:38.141 
07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2412141 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2412141 /var/tmp/bdevperf.sock 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2412141 ']' 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.141 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.401 [2024-11-27 07:18:49.377243] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:23:38.401 [2024-11-27 07:18:49.377299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412141 ] 00:23:38.401 [2024-11-27 07:18:49.462809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.401 [2024-11-27 07:18:49.490626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:39.468 [2024-11-27 07:18:50.329837] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:39.468 [2024-11-27 07:18:50.329869] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:39.468 request: 00:23:39.468 { 00:23:39.468 "name": "key0", 00:23:39.468 "path": "", 00:23:39.468 "method": "keyring_file_add_key", 00:23:39.468 "req_id": 1 00:23:39.468 } 00:23:39.468 Got JSON-RPC error response 00:23:39.468 response: 00:23:39.468 { 00:23:39.468 "code": -1, 00:23:39.468 "message": "Operation not permitted" 00:23:39.468 } 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.468 [2024-11-27 07:18:50.506363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.468 [2024-11-27 07:18:50.506386] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:39.468 request: 00:23:39.468 { 00:23:39.468 "name": "TLSTEST", 00:23:39.468 "trtype": "tcp", 00:23:39.468 "traddr": "10.0.0.2", 00:23:39.468 "adrfam": "ipv4", 00:23:39.468 "trsvcid": "4420", 00:23:39.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.468 "prchk_reftag": false, 00:23:39.468 "prchk_guard": false, 00:23:39.468 "hdgst": false, 00:23:39.468 "ddgst": false, 00:23:39.468 "psk": "key0", 00:23:39.468 "allow_unrecognized_csi": false, 00:23:39.468 "method": "bdev_nvme_attach_controller", 00:23:39.468 "req_id": 1 00:23:39.468 } 00:23:39.468 Got JSON-RPC error response 00:23:39.468 response: 00:23:39.468 { 00:23:39.468 "code": -126, 00:23:39.468 "message": "Required key not available" 00:23:39.468 } 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2412141 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2412141 ']' 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2412141 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2412141 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2412141' 00:23:39.468 killing process with pid 2412141 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2412141 00:23:39.468 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.468 00:23:39.468 Latency(us) 00:23:39.468 [2024-11-27T06:18:50.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.468 [2024-11-27T06:18:50.673Z] =================================================================================================================== 00:23:39.468 [2024-11-27T06:18:50.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.468 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2412141 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2406250 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2406250 ']' 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2406250 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2406250 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2406250' 00:23:39.775 killing process with pid 2406250 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2406250 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2406250 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:39.775 07:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.8HJ5SGooEZ 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.8HJ5SGooEZ 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2412501 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2412501 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2412501 ']' 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.775 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.036 [2024-11-27 07:18:50.984434] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:23:40.036 [2024-11-27 07:18:50.984490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.036 [2024-11-27 07:18:51.076216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.036 [2024-11-27 07:18:51.105575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.036 [2024-11-27 07:18:51.105605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
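A few lines up, format_interchange_psk shows how the test derives the TLS key it will hand to the target: the configured key bytes get their CRC-32 appended, the result is base64-encoded, and the whole string is wrapped in an NVMeTLSkey-1:<hash>: envelope, with hash 02 denoting SHA-384. A standalone reconstruction of what that inline python step evidently computes, assuming the key-plus-little-endian-CRC32 layout (it reproduces the NVMeTLSkey-1:02:MDAx...wWXNJw==: value printed above):

# Derive the interchange-format PSK from the raw key used by the test
python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF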
00:23:40.036 [2024-11-27 07:18:51.105611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.036 [2024-11-27 07:18:51.105616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.036 [2024-11-27 07:18:51.105620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.036 [2024-11-27 07:18:51.106079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.606 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.606 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:40.606 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.606 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.606 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.867 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.867 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.8HJ5SGooEZ 00:23:40.867 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8HJ5SGooEZ 00:23:40.867 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:40.867 [2024-11-27 07:18:51.982417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.867 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:41.127 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.389 [2024-11-27 07:18:52.339294] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.389 [2024-11-27 07:18:52.339496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.389 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.389 malloc0 00:23:41.389 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:41.649 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:23:41.910 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HJ5SGooEZ 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8HJ5SGooEZ 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2412869 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2412869 /var/tmp/bdevperf.sock 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2412869 ']' 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.910 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.910 [2024-11-27 07:18:53.084830] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
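The bdevperf instance starting here is the first one expected to connect successfully, because setup_nvmf_tgt has just provisioned the target end to end. Condensed from the trace above, with names, sizes, and flags exactly as logged and the workspace path shortened to scripts/rpc.py:

# Stand up a TLS-capable target: transport, subsystem, secure listener (-k),
# a 32 x 4096 malloc bdev as namespace 1, and host1 authorized with key0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0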
00:23:41.910 [2024-11-27 07:18:53.084875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412869 ] 00:23:42.179 [2024-11-27 07:18:53.161215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.179 [2024-11-27 07:18:53.190186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.179 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.179 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.179 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:23:42.440 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.440 [2024-11-27 07:18:53.604513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.702 TLSTESTn1 00:23:42.702 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:42.702 Running I/O for 10 seconds... 00:23:44.589 5142.00 IOPS, 20.09 MiB/s [2024-11-27T06:18:57.178Z] 5444.50 IOPS, 21.27 MiB/s [2024-11-27T06:18:58.120Z] 5700.00 IOPS, 22.27 MiB/s [2024-11-27T06:18:59.064Z] 5707.25 IOPS, 22.29 MiB/s [2024-11-27T06:19:00.008Z] 5844.80 IOPS, 22.83 MiB/s [2024-11-27T06:19:00.952Z] 5923.67 IOPS, 23.14 MiB/s [2024-11-27T06:19:01.895Z] 5917.71 IOPS, 23.12 MiB/s [2024-11-27T06:19:02.838Z] 5873.88 IOPS, 22.94 MiB/s [2024-11-27T06:19:04.221Z] 5883.56 IOPS, 22.98 MiB/s [2024-11-27T06:19:04.221Z] 5877.90 IOPS, 22.96 MiB/s 00:23:53.016 Latency(us) 00:23:53.016 [2024-11-27T06:19:04.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.016 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:53.016 Verification LBA range: start 0x0 length 0x2000 00:23:53.016 TLSTESTn1 : 10.01 5882.66 22.98 0.00 0.00 21728.59 5024.43 97867.09 00:23:53.016 [2024-11-27T06:19:04.221Z] =================================================================================================================== 00:23:53.016 [2024-11-27T06:19:04.221Z] Total : 5882.66 22.98 0.00 0.00 21728.59 5024.43 97867.09 00:23:53.016 { 00:23:53.016 "results": [ 00:23:53.016 { 00:23:53.016 "job": "TLSTESTn1", 00:23:53.016 "core_mask": "0x4", 00:23:53.016 "workload": "verify", 00:23:53.016 "status": "finished", 00:23:53.016 "verify_range": { 00:23:53.016 "start": 0, 00:23:53.016 "length": 8192 00:23:53.016 }, 00:23:53.016 "queue_depth": 128, 00:23:53.016 "io_size": 4096, 00:23:53.016 "runtime": 10.013323, 00:23:53.016 "iops": 5882.6625287129955, 00:23:53.016 "mibps": 22.97915050278514, 00:23:53.016 "io_failed": 0, 00:23:53.016 "io_timeout": 0, 00:23:53.016 "avg_latency_us": 21728.5944939592, 00:23:53.016 "min_latency_us": 5024.426666666666, 00:23:53.016 "max_latency_us": 97867.09333333334 00:23:53.016 } 00:23:53.016 ], 00:23:53.016 
"core_count": 1 00:23:53.016 } 00:23:53.016 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.016 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2412869 00:23:53.016 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2412869 ']' 00:23:53.016 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2412869 00:23:53.016 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:53.016 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.016 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2412869 00:23:53.017 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:53.017 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:53.017 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2412869' 00:23:53.017 killing process with pid 2412869 00:23:53.017 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2412869 00:23:53.017 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.017 00:23:53.017 Latency(us) 00:23:53.017 [2024-11-27T06:19:04.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.017 [2024-11-27T06:19:04.222Z] =================================================================================================================== 00:23:53.017 [2024-11-27T06:19:04.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.017 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2412869 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.8HJ5SGooEZ 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HJ5SGooEZ 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HJ5SGooEZ 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8HJ5SGooEZ 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8HJ5SGooEZ 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2415093 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2415093 /var/tmp/bdevperf.sock 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2415093 ']' 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.017 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.017 [2024-11-27 07:19:04.071096] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
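Before turning to the permissions checks, note that the ten-second TLSTESTn1 run above is the section's positive control: roughly 5.9k 4 KiB IOPS sustained through the TLS-encrypted data path. The MiB/s column is plain unit conversion from the measured IOPS:

# Cross-check the throughput column of the Latency table above
echo '5882.66 * 4096 / 1048576' | bc -l   # ~22.98 MiB/s, as reported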
00:23:53.017 [2024-11-27 07:19:04.071155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415093 ] 00:23:53.017 [2024-11-27 07:19:04.159068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.017 [2024-11-27 07:19:04.188288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.958 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.958 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:53.958 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:23:53.958 [2024-11-27 07:19:05.011753] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8HJ5SGooEZ': 0100666 00:23:53.958 [2024-11-27 07:19:05.011774] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:53.958 request: 00:23:53.958 { 00:23:53.958 "name": "key0", 00:23:53.958 "path": "/tmp/tmp.8HJ5SGooEZ", 00:23:53.958 "method": "keyring_file_add_key", 00:23:53.958 "req_id": 1 00:23:53.958 } 00:23:53.958 Got JSON-RPC error response 00:23:53.958 response: 00:23:53.958 { 00:23:53.958 "code": -1, 00:23:53.958 "message": "Operation not permitted" 00:23:53.958 } 00:23:53.958 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.219 [2024-11-27 07:19:05.196281] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.219 [2024-11-27 07:19:05.196303] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:54.219 request: 00:23:54.219 { 00:23:54.219 "name": "TLSTEST", 00:23:54.219 "trtype": "tcp", 00:23:54.219 "traddr": "10.0.0.2", 00:23:54.219 "adrfam": "ipv4", 00:23:54.219 "trsvcid": "4420", 00:23:54.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.219 "prchk_reftag": false, 00:23:54.219 "prchk_guard": false, 00:23:54.219 "hdgst": false, 00:23:54.219 "ddgst": false, 00:23:54.219 "psk": "key0", 00:23:54.219 "allow_unrecognized_csi": false, 00:23:54.219 "method": "bdev_nvme_attach_controller", 00:23:54.219 "req_id": 1 00:23:54.219 } 00:23:54.219 Got JSON-RPC error response 00:23:54.219 response: 00:23:54.219 { 00:23:54.219 "code": -126, 00:23:54.219 "message": "Required key not available" 00:23:54.219 } 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2415093 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2415093 ']' 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2415093 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415093 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415093' 00:23:54.219 killing process with pid 2415093 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2415093 00:23:54.219 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.219 00:23:54.219 Latency(us) 00:23:54.219 [2024-11-27T06:19:05.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.219 [2024-11-27T06:19:05.424Z] =================================================================================================================== 00:23:54.219 [2024-11-27T06:19:05.424Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2415093 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:54.219 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2412501 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2412501 ']' 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2412501 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.220 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2412501 00:23:54.480 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:54.480 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:54.480 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2412501' 00:23:54.480 killing process with pid 2412501 00:23:54.480 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2412501 00:23:54.480 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2412501 00:23:54.480 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2415362 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2415362 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2415362 ']' 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.481 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.481 [2024-11-27 07:19:05.627685] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:23:54.481 [2024-11-27 07:19:05.627766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.741 [2024-11-27 07:19:05.721949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.741 [2024-11-27 07:19:05.755949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.741 [2024-11-27 07:19:05.755981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.741 [2024-11-27 07:19:05.755986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.741 [2024-11-27 07:19:05.755991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.741 [2024-11-27 07:19:05.755995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:54.741 [2024-11-27 07:19:05.756501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.311 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.311 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.8HJ5SGooEZ 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.8HJ5SGooEZ 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.8HJ5SGooEZ 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8HJ5SGooEZ 00:23:55.312 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.572 [2024-11-27 07:19:06.631643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.572 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.833 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.833 [2024-11-27 07:19:06.992525] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.833 [2024-11-27 07:19:06.992715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.833 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:56.094 malloc0 00:23:56.094 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.355 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:23:56.355 [2024-11-27 
07:19:07.531401] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8HJ5SGooEZ': 0100666 00:23:56.355 [2024-11-27 07:19:07.531421] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:56.355 request: 00:23:56.355 { 00:23:56.355 "name": "key0", 00:23:56.355 "path": "/tmp/tmp.8HJ5SGooEZ", 00:23:56.355 "method": "keyring_file_add_key", 00:23:56.355 "req_id": 1 00:23:56.355 } 00:23:56.355 Got JSON-RPC error response 00:23:56.355 response: 00:23:56.355 { 00:23:56.355 "code": -1, 00:23:56.355 "message": "Operation not permitted" 00:23:56.355 } 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.615 [2024-11-27 07:19:07.715876] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:56.615 [2024-11-27 07:19:07.715901] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:56.615 request: 00:23:56.615 { 00:23:56.615 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.615 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.615 "psk": "key0", 00:23:56.615 "method": "nvmf_subsystem_add_host", 00:23:56.615 "req_id": 1 00:23:56.615 } 00:23:56.615 Got JSON-RPC error response 00:23:56.615 response: 00:23:56.615 { 00:23:56.615 "code": -32603, 00:23:56.615 "message": "Internal error" 00:23:56.615 } 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2415362 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2415362 ']' 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2415362 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415362 00:23:56.615 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.616 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.616 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415362' 00:23:56.616 killing process with pid 2415362 00:23:56.616 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2415362 00:23:56.616 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2415362 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.8HJ5SGooEZ 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:56.876 07:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2415921 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2415921 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2415921 ']' 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.876 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.876 [2024-11-27 07:19:07.979903] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:23:56.876 [2024-11-27 07:19:07.979957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.876 [2024-11-27 07:19:08.071866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.137 [2024-11-27 07:19:08.101571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.137 [2024-11-27 07:19:08.101600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.137 [2024-11-27 07:19:08.101606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.137 [2024-11-27 07:19:08.101610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.137 [2024-11-27 07:19:08.101614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
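
The two RPC errors above are the expected negative result for target/tls.sh@178: the PSK file was created with mode 0666, and the "Invalid permissions for key file ... 0100666" message from keyring_file_check_path — together with the script's chmod 0600 remedy at target/tls.sh@182 — indicates that SPDK's file-based keyring rejects key files that are group- or world-accessible. keyring_file_add_key therefore returns -1 (Operation not permitted), and the dependent nvmf_subsystem_add_host fails with -32603 because key0 was never registered. A minimal sketch of the fix the script applies before retrying (the mode bits are the point; the path is this run's temp file and the rpc.py path is shortened for readability):

  # tighten permissions so keyring_file_check_path accepts the key,
  # then retry the registration against the freshly started target
  chmod 0600 /tmp/tmp.8HJ5SGooEZ
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ
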
00:23:57.137 [2024-11-27 07:19:08.102025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.8HJ5SGooEZ 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8HJ5SGooEZ 00:23:57.708 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.970 [2024-11-27 07:19:08.982905] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.970 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:58.233 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:58.233 [2024-11-27 07:19:09.347798] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.233 [2024-11-27 07:19:09.348006] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.233 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:58.494 malloc0 00:23:58.494 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:58.778 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:23:58.778 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2416287 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2416287 /var/tmp/bdevperf.sock 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2416287 ']' 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.039 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.039 [2024-11-27 07:19:10.151732] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:23:59.039 [2024-11-27 07:19:10.151787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416287 ] 00:23:59.039 [2024-11-27 07:19:10.239767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.300 [2024-11-27 07:19:10.275055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.871 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.871 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.871 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:24:00.132 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.132 [2024-11-27 07:19:11.291817] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.393 TLSTESTn1 00:24:00.393 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:00.656 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:00.656 "subsystems": [ 00:24:00.656 { 00:24:00.656 "subsystem": "keyring", 00:24:00.656 "config": [ 00:24:00.656 { 00:24:00.656 "method": "keyring_file_add_key", 00:24:00.656 "params": { 00:24:00.656 "name": "key0", 00:24:00.656 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:00.656 } 00:24:00.656 } 00:24:00.656 ] 00:24:00.656 }, 00:24:00.656 { 00:24:00.656 "subsystem": "iobuf", 00:24:00.656 "config": [ 00:24:00.656 { 00:24:00.656 "method": "iobuf_set_options", 00:24:00.656 "params": { 00:24:00.656 "small_pool_count": 8192, 00:24:00.656 "large_pool_count": 1024, 00:24:00.656 "small_bufsize": 8192, 00:24:00.656 "large_bufsize": 135168, 00:24:00.656 "enable_numa": false 00:24:00.656 } 00:24:00.656 } 00:24:00.656 ] 00:24:00.656 }, 00:24:00.656 { 00:24:00.656 "subsystem": "sock", 00:24:00.656 "config": [ 00:24:00.656 { 00:24:00.656 "method": "sock_set_default_impl", 00:24:00.656 "params": { 00:24:00.656 "impl_name": "posix" 
00:24:00.656 } 00:24:00.656 }, 00:24:00.656 { 00:24:00.656 "method": "sock_impl_set_options", 00:24:00.656 "params": { 00:24:00.656 "impl_name": "ssl", 00:24:00.656 "recv_buf_size": 4096, 00:24:00.656 "send_buf_size": 4096, 00:24:00.656 "enable_recv_pipe": true, 00:24:00.656 "enable_quickack": false, 00:24:00.656 "enable_placement_id": 0, 00:24:00.656 "enable_zerocopy_send_server": true, 00:24:00.656 "enable_zerocopy_send_client": false, 00:24:00.656 "zerocopy_threshold": 0, 00:24:00.656 "tls_version": 0, 00:24:00.656 "enable_ktls": false 00:24:00.656 } 00:24:00.656 }, 00:24:00.656 { 00:24:00.656 "method": "sock_impl_set_options", 00:24:00.656 "params": { 00:24:00.656 "impl_name": "posix", 00:24:00.656 "recv_buf_size": 2097152, 00:24:00.656 "send_buf_size": 2097152, 00:24:00.657 "enable_recv_pipe": true, 00:24:00.657 "enable_quickack": false, 00:24:00.657 "enable_placement_id": 0, 00:24:00.657 "enable_zerocopy_send_server": true, 00:24:00.657 "enable_zerocopy_send_client": false, 00:24:00.657 "zerocopy_threshold": 0, 00:24:00.657 "tls_version": 0, 00:24:00.657 "enable_ktls": false 00:24:00.657 } 00:24:00.657 } 00:24:00.657 ] 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "subsystem": "vmd", 00:24:00.657 "config": [] 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "subsystem": "accel", 00:24:00.657 "config": [ 00:24:00.657 { 00:24:00.657 "method": "accel_set_options", 00:24:00.657 "params": { 00:24:00.657 "small_cache_size": 128, 00:24:00.657 "large_cache_size": 16, 00:24:00.657 "task_count": 2048, 00:24:00.657 "sequence_count": 2048, 00:24:00.657 "buf_count": 2048 00:24:00.657 } 00:24:00.657 } 00:24:00.657 ] 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "subsystem": "bdev", 00:24:00.657 "config": [ 00:24:00.657 { 00:24:00.657 "method": "bdev_set_options", 00:24:00.657 "params": { 00:24:00.657 "bdev_io_pool_size": 65535, 00:24:00.657 "bdev_io_cache_size": 256, 00:24:00.657 "bdev_auto_examine": true, 00:24:00.657 "iobuf_small_cache_size": 128, 00:24:00.657 "iobuf_large_cache_size": 16 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "bdev_raid_set_options", 00:24:00.657 "params": { 00:24:00.657 "process_window_size_kb": 1024, 00:24:00.657 "process_max_bandwidth_mb_sec": 0 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "bdev_iscsi_set_options", 00:24:00.657 "params": { 00:24:00.657 "timeout_sec": 30 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "bdev_nvme_set_options", 00:24:00.657 "params": { 00:24:00.657 "action_on_timeout": "none", 00:24:00.657 "timeout_us": 0, 00:24:00.657 "timeout_admin_us": 0, 00:24:00.657 "keep_alive_timeout_ms": 10000, 00:24:00.657 "arbitration_burst": 0, 00:24:00.657 "low_priority_weight": 0, 00:24:00.657 "medium_priority_weight": 0, 00:24:00.657 "high_priority_weight": 0, 00:24:00.657 "nvme_adminq_poll_period_us": 10000, 00:24:00.657 "nvme_ioq_poll_period_us": 0, 00:24:00.657 "io_queue_requests": 0, 00:24:00.657 "delay_cmd_submit": true, 00:24:00.657 "transport_retry_count": 4, 00:24:00.657 "bdev_retry_count": 3, 00:24:00.657 "transport_ack_timeout": 0, 00:24:00.657 "ctrlr_loss_timeout_sec": 0, 00:24:00.657 "reconnect_delay_sec": 0, 00:24:00.657 "fast_io_fail_timeout_sec": 0, 00:24:00.657 "disable_auto_failback": false, 00:24:00.657 "generate_uuids": false, 00:24:00.657 "transport_tos": 0, 00:24:00.657 "nvme_error_stat": false, 00:24:00.657 "rdma_srq_size": 0, 00:24:00.657 "io_path_stat": false, 00:24:00.657 "allow_accel_sequence": false, 00:24:00.657 "rdma_max_cq_size": 0, 00:24:00.657 
"rdma_cm_event_timeout_ms": 0, 00:24:00.657 "dhchap_digests": [ 00:24:00.657 "sha256", 00:24:00.657 "sha384", 00:24:00.657 "sha512" 00:24:00.657 ], 00:24:00.657 "dhchap_dhgroups": [ 00:24:00.657 "null", 00:24:00.657 "ffdhe2048", 00:24:00.657 "ffdhe3072", 00:24:00.657 "ffdhe4096", 00:24:00.657 "ffdhe6144", 00:24:00.657 "ffdhe8192" 00:24:00.657 ] 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "bdev_nvme_set_hotplug", 00:24:00.657 "params": { 00:24:00.657 "period_us": 100000, 00:24:00.657 "enable": false 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "bdev_malloc_create", 00:24:00.657 "params": { 00:24:00.657 "name": "malloc0", 00:24:00.657 "num_blocks": 8192, 00:24:00.657 "block_size": 4096, 00:24:00.657 "physical_block_size": 4096, 00:24:00.657 "uuid": "81736328-9563-4825-a75b-8cffeb6c3e0f", 00:24:00.657 "optimal_io_boundary": 0, 00:24:00.657 "md_size": 0, 00:24:00.657 "dif_type": 0, 00:24:00.657 "dif_is_head_of_md": false, 00:24:00.657 "dif_pi_format": 0 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "bdev_wait_for_examine" 00:24:00.657 } 00:24:00.657 ] 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "subsystem": "nbd", 00:24:00.657 "config": [] 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "subsystem": "scheduler", 00:24:00.657 "config": [ 00:24:00.657 { 00:24:00.657 "method": "framework_set_scheduler", 00:24:00.657 "params": { 00:24:00.657 "name": "static" 00:24:00.657 } 00:24:00.657 } 00:24:00.657 ] 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "subsystem": "nvmf", 00:24:00.657 "config": [ 00:24:00.657 { 00:24:00.657 "method": "nvmf_set_config", 00:24:00.657 "params": { 00:24:00.657 "discovery_filter": "match_any", 00:24:00.657 "admin_cmd_passthru": { 00:24:00.657 "identify_ctrlr": false 00:24:00.657 }, 00:24:00.657 "dhchap_digests": [ 00:24:00.657 "sha256", 00:24:00.657 "sha384", 00:24:00.657 "sha512" 00:24:00.657 ], 00:24:00.657 "dhchap_dhgroups": [ 00:24:00.657 "null", 00:24:00.657 "ffdhe2048", 00:24:00.657 "ffdhe3072", 00:24:00.657 "ffdhe4096", 00:24:00.657 "ffdhe6144", 00:24:00.657 "ffdhe8192" 00:24:00.657 ] 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "nvmf_set_max_subsystems", 00:24:00.657 "params": { 00:24:00.657 "max_subsystems": 1024 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "nvmf_set_crdt", 00:24:00.657 "params": { 00:24:00.657 "crdt1": 0, 00:24:00.657 "crdt2": 0, 00:24:00.657 "crdt3": 0 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "nvmf_create_transport", 00:24:00.657 "params": { 00:24:00.657 "trtype": "TCP", 00:24:00.657 "max_queue_depth": 128, 00:24:00.657 "max_io_qpairs_per_ctrlr": 127, 00:24:00.657 "in_capsule_data_size": 4096, 00:24:00.657 "max_io_size": 131072, 00:24:00.657 "io_unit_size": 131072, 00:24:00.657 "max_aq_depth": 128, 00:24:00.657 "num_shared_buffers": 511, 00:24:00.657 "buf_cache_size": 4294967295, 00:24:00.657 "dif_insert_or_strip": false, 00:24:00.657 "zcopy": false, 00:24:00.657 "c2h_success": false, 00:24:00.657 "sock_priority": 0, 00:24:00.657 "abort_timeout_sec": 1, 00:24:00.657 "ack_timeout": 0, 00:24:00.657 "data_wr_pool_size": 0 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "nvmf_create_subsystem", 00:24:00.657 "params": { 00:24:00.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.657 "allow_any_host": false, 00:24:00.657 "serial_number": "SPDK00000000000001", 00:24:00.657 "model_number": "SPDK bdev Controller", 00:24:00.657 "max_namespaces": 10, 00:24:00.657 "min_cntlid": 1, 00:24:00.657 
"max_cntlid": 65519, 00:24:00.657 "ana_reporting": false 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "nvmf_subsystem_add_host", 00:24:00.657 "params": { 00:24:00.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.657 "host": "nqn.2016-06.io.spdk:host1", 00:24:00.657 "psk": "key0" 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "nvmf_subsystem_add_ns", 00:24:00.657 "params": { 00:24:00.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.657 "namespace": { 00:24:00.657 "nsid": 1, 00:24:00.657 "bdev_name": "malloc0", 00:24:00.657 "nguid": "8173632895634825A75B8CFFEB6C3E0F", 00:24:00.657 "uuid": "81736328-9563-4825-a75b-8cffeb6c3e0f", 00:24:00.657 "no_auto_visible": false 00:24:00.657 } 00:24:00.657 } 00:24:00.657 }, 00:24:00.657 { 00:24:00.657 "method": "nvmf_subsystem_add_listener", 00:24:00.657 "params": { 00:24:00.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.657 "listen_address": { 00:24:00.657 "trtype": "TCP", 00:24:00.657 "adrfam": "IPv4", 00:24:00.658 "traddr": "10.0.0.2", 00:24:00.658 "trsvcid": "4420" 00:24:00.658 }, 00:24:00.658 "secure_channel": true 00:24:00.658 } 00:24:00.658 } 00:24:00.658 ] 00:24:00.658 } 00:24:00.658 ] 00:24:00.658 }' 00:24:00.658 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:00.919 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:00.919 "subsystems": [ 00:24:00.919 { 00:24:00.919 "subsystem": "keyring", 00:24:00.919 "config": [ 00:24:00.919 { 00:24:00.919 "method": "keyring_file_add_key", 00:24:00.919 "params": { 00:24:00.919 "name": "key0", 00:24:00.919 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:00.919 } 00:24:00.919 } 00:24:00.919 ] 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "subsystem": "iobuf", 00:24:00.919 "config": [ 00:24:00.919 { 00:24:00.919 "method": "iobuf_set_options", 00:24:00.919 "params": { 00:24:00.919 "small_pool_count": 8192, 00:24:00.919 "large_pool_count": 1024, 00:24:00.919 "small_bufsize": 8192, 00:24:00.919 "large_bufsize": 135168, 00:24:00.919 "enable_numa": false 00:24:00.919 } 00:24:00.919 } 00:24:00.919 ] 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "subsystem": "sock", 00:24:00.919 "config": [ 00:24:00.919 { 00:24:00.919 "method": "sock_set_default_impl", 00:24:00.919 "params": { 00:24:00.919 "impl_name": "posix" 00:24:00.919 } 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "method": "sock_impl_set_options", 00:24:00.919 "params": { 00:24:00.919 "impl_name": "ssl", 00:24:00.919 "recv_buf_size": 4096, 00:24:00.919 "send_buf_size": 4096, 00:24:00.919 "enable_recv_pipe": true, 00:24:00.919 "enable_quickack": false, 00:24:00.919 "enable_placement_id": 0, 00:24:00.919 "enable_zerocopy_send_server": true, 00:24:00.919 "enable_zerocopy_send_client": false, 00:24:00.919 "zerocopy_threshold": 0, 00:24:00.919 "tls_version": 0, 00:24:00.919 "enable_ktls": false 00:24:00.919 } 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "method": "sock_impl_set_options", 00:24:00.919 "params": { 00:24:00.919 "impl_name": "posix", 00:24:00.919 "recv_buf_size": 2097152, 00:24:00.919 "send_buf_size": 2097152, 00:24:00.919 "enable_recv_pipe": true, 00:24:00.919 "enable_quickack": false, 00:24:00.919 "enable_placement_id": 0, 00:24:00.919 "enable_zerocopy_send_server": true, 00:24:00.919 "enable_zerocopy_send_client": false, 00:24:00.919 "zerocopy_threshold": 0, 00:24:00.919 "tls_version": 0, 00:24:00.919 "enable_ktls": false 00:24:00.919 } 00:24:00.919 
} 00:24:00.919 ] 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "subsystem": "vmd", 00:24:00.919 "config": [] 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "subsystem": "accel", 00:24:00.919 "config": [ 00:24:00.919 { 00:24:00.919 "method": "accel_set_options", 00:24:00.919 "params": { 00:24:00.919 "small_cache_size": 128, 00:24:00.919 "large_cache_size": 16, 00:24:00.919 "task_count": 2048, 00:24:00.919 "sequence_count": 2048, 00:24:00.919 "buf_count": 2048 00:24:00.919 } 00:24:00.919 } 00:24:00.919 ] 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "subsystem": "bdev", 00:24:00.919 "config": [ 00:24:00.919 { 00:24:00.919 "method": "bdev_set_options", 00:24:00.919 "params": { 00:24:00.919 "bdev_io_pool_size": 65535, 00:24:00.919 "bdev_io_cache_size": 256, 00:24:00.919 "bdev_auto_examine": true, 00:24:00.919 "iobuf_small_cache_size": 128, 00:24:00.919 "iobuf_large_cache_size": 16 00:24:00.919 } 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "method": "bdev_raid_set_options", 00:24:00.919 "params": { 00:24:00.919 "process_window_size_kb": 1024, 00:24:00.919 "process_max_bandwidth_mb_sec": 0 00:24:00.919 } 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "method": "bdev_iscsi_set_options", 00:24:00.919 "params": { 00:24:00.919 "timeout_sec": 30 00:24:00.919 } 00:24:00.919 }, 00:24:00.919 { 00:24:00.919 "method": "bdev_nvme_set_options", 00:24:00.919 "params": { 00:24:00.919 "action_on_timeout": "none", 00:24:00.919 "timeout_us": 0, 00:24:00.919 "timeout_admin_us": 0, 00:24:00.919 "keep_alive_timeout_ms": 10000, 00:24:00.920 "arbitration_burst": 0, 00:24:00.920 "low_priority_weight": 0, 00:24:00.920 "medium_priority_weight": 0, 00:24:00.920 "high_priority_weight": 0, 00:24:00.920 "nvme_adminq_poll_period_us": 10000, 00:24:00.920 "nvme_ioq_poll_period_us": 0, 00:24:00.920 "io_queue_requests": 512, 00:24:00.920 "delay_cmd_submit": true, 00:24:00.920 "transport_retry_count": 4, 00:24:00.920 "bdev_retry_count": 3, 00:24:00.920 "transport_ack_timeout": 0, 00:24:00.920 "ctrlr_loss_timeout_sec": 0, 00:24:00.920 "reconnect_delay_sec": 0, 00:24:00.920 "fast_io_fail_timeout_sec": 0, 00:24:00.920 "disable_auto_failback": false, 00:24:00.920 "generate_uuids": false, 00:24:00.920 "transport_tos": 0, 00:24:00.920 "nvme_error_stat": false, 00:24:00.920 "rdma_srq_size": 0, 00:24:00.920 "io_path_stat": false, 00:24:00.920 "allow_accel_sequence": false, 00:24:00.920 "rdma_max_cq_size": 0, 00:24:00.920 "rdma_cm_event_timeout_ms": 0, 00:24:00.920 "dhchap_digests": [ 00:24:00.920 "sha256", 00:24:00.920 "sha384", 00:24:00.920 "sha512" 00:24:00.920 ], 00:24:00.920 "dhchap_dhgroups": [ 00:24:00.920 "null", 00:24:00.920 "ffdhe2048", 00:24:00.920 "ffdhe3072", 00:24:00.920 "ffdhe4096", 00:24:00.920 "ffdhe6144", 00:24:00.920 "ffdhe8192" 00:24:00.920 ] 00:24:00.920 } 00:24:00.920 }, 00:24:00.920 { 00:24:00.920 "method": "bdev_nvme_attach_controller", 00:24:00.920 "params": { 00:24:00.920 "name": "TLSTEST", 00:24:00.920 "trtype": "TCP", 00:24:00.920 "adrfam": "IPv4", 00:24:00.920 "traddr": "10.0.0.2", 00:24:00.920 "trsvcid": "4420", 00:24:00.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.920 "prchk_reftag": false, 00:24:00.920 "prchk_guard": false, 00:24:00.920 "ctrlr_loss_timeout_sec": 0, 00:24:00.920 "reconnect_delay_sec": 0, 00:24:00.920 "fast_io_fail_timeout_sec": 0, 00:24:00.920 "psk": "key0", 00:24:00.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.920 "hdgst": false, 00:24:00.920 "ddgst": false, 00:24:00.920 "multipath": "multipath" 00:24:00.920 } 00:24:00.920 }, 00:24:00.920 { 00:24:00.920 "method": 
"bdev_nvme_set_hotplug", 00:24:00.920 "params": { 00:24:00.920 "period_us": 100000, 00:24:00.920 "enable": false 00:24:00.920 } 00:24:00.920 }, 00:24:00.920 { 00:24:00.920 "method": "bdev_wait_for_examine" 00:24:00.920 } 00:24:00.920 ] 00:24:00.920 }, 00:24:00.920 { 00:24:00.920 "subsystem": "nbd", 00:24:00.920 "config": [] 00:24:00.920 } 00:24:00.920 ] 00:24:00.920 }' 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2416287 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2416287 ']' 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2416287 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416287 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416287' 00:24:00.920 killing process with pid 2416287 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2416287 00:24:00.920 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.920 00:24:00.920 Latency(us) 00:24:00.920 [2024-11-27T06:19:12.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.920 [2024-11-27T06:19:12.125Z] =================================================================================================================== 00:24:00.920 [2024-11-27T06:19:12.125Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:00.920 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2416287 00:24:00.920 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2415921 00:24:00.920 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2415921 ']' 00:24:00.920 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2415921 00:24:00.920 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.920 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.920 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415921 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415921' 00:24:01.182 killing process with pid 2415921 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2415921 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2415921 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.182 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:01.182 "subsystems": [ 00:24:01.182 { 00:24:01.182 "subsystem": "keyring", 00:24:01.182 "config": [ 00:24:01.182 { 00:24:01.182 "method": "keyring_file_add_key", 00:24:01.182 "params": { 00:24:01.182 "name": "key0", 00:24:01.182 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:01.182 } 00:24:01.182 } 00:24:01.182 ] 00:24:01.182 }, 00:24:01.182 { 00:24:01.182 "subsystem": "iobuf", 00:24:01.182 "config": [ 00:24:01.182 { 00:24:01.182 "method": "iobuf_set_options", 00:24:01.182 "params": { 00:24:01.182 "small_pool_count": 8192, 00:24:01.182 "large_pool_count": 1024, 00:24:01.182 "small_bufsize": 8192, 00:24:01.182 "large_bufsize": 135168, 00:24:01.182 "enable_numa": false 00:24:01.182 } 00:24:01.182 } 00:24:01.182 ] 00:24:01.182 }, 00:24:01.182 { 00:24:01.182 "subsystem": "sock", 00:24:01.182 "config": [ 00:24:01.182 { 00:24:01.182 "method": "sock_set_default_impl", 00:24:01.182 "params": { 00:24:01.182 "impl_name": "posix" 00:24:01.182 } 00:24:01.182 }, 00:24:01.182 { 00:24:01.182 "method": "sock_impl_set_options", 00:24:01.182 "params": { 00:24:01.182 "impl_name": "ssl", 00:24:01.183 "recv_buf_size": 4096, 00:24:01.183 "send_buf_size": 4096, 00:24:01.183 "enable_recv_pipe": true, 00:24:01.183 "enable_quickack": false, 00:24:01.183 "enable_placement_id": 0, 00:24:01.183 "enable_zerocopy_send_server": true, 00:24:01.183 "enable_zerocopy_send_client": false, 00:24:01.183 "zerocopy_threshold": 0, 00:24:01.183 "tls_version": 0, 00:24:01.183 "enable_ktls": false 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "sock_impl_set_options", 00:24:01.183 "params": { 00:24:01.183 "impl_name": "posix", 00:24:01.183 "recv_buf_size": 2097152, 00:24:01.183 "send_buf_size": 2097152, 00:24:01.183 "enable_recv_pipe": true, 00:24:01.183 "enable_quickack": false, 00:24:01.183 "enable_placement_id": 0, 00:24:01.183 "enable_zerocopy_send_server": true, 00:24:01.183 "enable_zerocopy_send_client": false, 00:24:01.183 "zerocopy_threshold": 0, 00:24:01.183 "tls_version": 0, 00:24:01.183 "enable_ktls": false 00:24:01.183 } 00:24:01.183 } 00:24:01.183 ] 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "subsystem": "vmd", 00:24:01.183 "config": [] 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "subsystem": "accel", 00:24:01.183 "config": [ 00:24:01.183 { 00:24:01.183 "method": "accel_set_options", 00:24:01.183 "params": { 00:24:01.183 "small_cache_size": 128, 00:24:01.183 "large_cache_size": 16, 00:24:01.183 "task_count": 2048, 00:24:01.183 "sequence_count": 2048, 00:24:01.183 "buf_count": 2048 00:24:01.183 } 00:24:01.183 } 00:24:01.183 ] 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "subsystem": "bdev", 00:24:01.183 "config": [ 00:24:01.183 { 00:24:01.183 "method": "bdev_set_options", 00:24:01.183 "params": { 00:24:01.183 "bdev_io_pool_size": 65535, 00:24:01.183 "bdev_io_cache_size": 256, 00:24:01.183 "bdev_auto_examine": true, 00:24:01.183 "iobuf_small_cache_size": 128, 00:24:01.183 "iobuf_large_cache_size": 16 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "bdev_raid_set_options", 00:24:01.183 "params": { 00:24:01.183 
"process_window_size_kb": 1024, 00:24:01.183 "process_max_bandwidth_mb_sec": 0 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "bdev_iscsi_set_options", 00:24:01.183 "params": { 00:24:01.183 "timeout_sec": 30 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "bdev_nvme_set_options", 00:24:01.183 "params": { 00:24:01.183 "action_on_timeout": "none", 00:24:01.183 "timeout_us": 0, 00:24:01.183 "timeout_admin_us": 0, 00:24:01.183 "keep_alive_timeout_ms": 10000, 00:24:01.183 "arbitration_burst": 0, 00:24:01.183 "low_priority_weight": 0, 00:24:01.183 "medium_priority_weight": 0, 00:24:01.183 "high_priority_weight": 0, 00:24:01.183 "nvme_adminq_poll_period_us": 10000, 00:24:01.183 "nvme_ioq_poll_period_us": 0, 00:24:01.183 "io_queue_requests": 0, 00:24:01.183 "delay_cmd_submit": true, 00:24:01.183 "transport_retry_count": 4, 00:24:01.183 "bdev_retry_count": 3, 00:24:01.183 "transport_ack_timeout": 0, 00:24:01.183 "ctrlr_loss_timeout_sec": 0, 00:24:01.183 "reconnect_delay_sec": 0, 00:24:01.183 "fast_io_fail_timeout_sec": 0, 00:24:01.183 "disable_auto_failback": false, 00:24:01.183 "generate_uuids": false, 00:24:01.183 "transport_tos": 0, 00:24:01.183 "nvme_error_stat": false, 00:24:01.183 "rdma_srq_size": 0, 00:24:01.183 "io_path_stat": false, 00:24:01.183 "allow_accel_sequence": false, 00:24:01.183 "rdma_max_cq_size": 0, 00:24:01.183 "rdma_cm_event_timeout_ms": 0, 00:24:01.183 "dhchap_digests": [ 00:24:01.183 "sha256", 00:24:01.183 "sha384", 00:24:01.183 "sha512" 00:24:01.183 ], 00:24:01.183 "dhchap_dhgroups": [ 00:24:01.183 "null", 00:24:01.183 "ffdhe2048", 00:24:01.183 "ffdhe3072", 00:24:01.183 "ffdhe4096", 00:24:01.183 "ffdhe6144", 00:24:01.183 "ffdhe8192" 00:24:01.183 ] 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "bdev_nvme_set_hotplug", 00:24:01.183 "params": { 00:24:01.183 "period_us": 100000, 00:24:01.183 "enable": false 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "bdev_malloc_create", 00:24:01.183 "params": { 00:24:01.183 "name": "malloc0", 00:24:01.183 "num_blocks": 8192, 00:24:01.183 "block_size": 4096, 00:24:01.183 "physical_block_size": 4096, 00:24:01.183 "uuid": "81736328-9563-4825-a75b-8cffeb6c3e0f", 00:24:01.183 "optimal_io_boundary": 0, 00:24:01.183 "md_size": 0, 00:24:01.183 "dif_type": 0, 00:24:01.183 "dif_is_head_of_md": false, 00:24:01.183 "dif_pi_format": 0 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "bdev_wait_for_examine" 00:24:01.183 } 00:24:01.183 ] 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "subsystem": "nbd", 00:24:01.183 "config": [] 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "subsystem": "scheduler", 00:24:01.183 "config": [ 00:24:01.183 { 00:24:01.183 "method": "framework_set_scheduler", 00:24:01.183 "params": { 00:24:01.183 "name": "static" 00:24:01.183 } 00:24:01.183 } 00:24:01.183 ] 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "subsystem": "nvmf", 00:24:01.183 "config": [ 00:24:01.183 { 00:24:01.183 "method": "nvmf_set_config", 00:24:01.183 "params": { 00:24:01.183 "discovery_filter": "match_any", 00:24:01.183 "admin_cmd_passthru": { 00:24:01.183 "identify_ctrlr": false 00:24:01.183 }, 00:24:01.183 "dhchap_digests": [ 00:24:01.183 "sha256", 00:24:01.183 "sha384", 00:24:01.183 "sha512" 00:24:01.183 ], 00:24:01.183 "dhchap_dhgroups": [ 00:24:01.183 "null", 00:24:01.183 "ffdhe2048", 00:24:01.183 "ffdhe3072", 00:24:01.183 "ffdhe4096", 00:24:01.183 "ffdhe6144", 00:24:01.183 "ffdhe8192" 00:24:01.183 ] 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 
00:24:01.183 "method": "nvmf_set_max_subsystems", 00:24:01.183 "params": { 00:24:01.183 "max_subsystems": 1024 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "nvmf_set_crdt", 00:24:01.183 "params": { 00:24:01.183 "crdt1": 0, 00:24:01.183 "crdt2": 0, 00:24:01.183 "crdt3": 0 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "nvmf_create_transport", 00:24:01.183 "params": { 00:24:01.183 "trtype": "TCP", 00:24:01.183 "max_queue_depth": 128, 00:24:01.183 "max_io_qpairs_per_ctrlr": 127, 00:24:01.183 "in_capsule_data_size": 4096, 00:24:01.183 "max_io_size": 131072, 00:24:01.183 "io_unit_size": 131072, 00:24:01.183 "max_aq_depth": 128, 00:24:01.183 "num_shared_buffers": 511, 00:24:01.183 "buf_cache_size": 4294967295, 00:24:01.183 "dif_insert_or_strip": false, 00:24:01.183 "zcopy": false, 00:24:01.183 "c2h_success": false, 00:24:01.183 "sock_priority": 0, 00:24:01.183 "abort_timeout_sec": 1, 00:24:01.183 "ack_timeout": 0, 00:24:01.183 "data_wr_pool_size": 0 00:24:01.183 } 00:24:01.183 }, 00:24:01.183 { 00:24:01.183 "method": "nvmf_create_subsystem", 00:24:01.183 "params": { 00:24:01.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.183 "allow_any_host": false, 00:24:01.183 "serial_number": "SPDK00000000000001", 00:24:01.183 "model_number": "SPDK bdev Controller", 00:24:01.183 "max_namespaces": 10, 00:24:01.183 "min_cntlid": 1, 00:24:01.183 "max_cntlid": 65519, 00:24:01.184 "ana_reporting": false 00:24:01.184 } 00:24:01.184 }, 00:24:01.184 { 00:24:01.184 "method": "nvmf_subsystem_add_host", 00:24:01.184 "params": { 00:24:01.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.184 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.184 "psk": "key0" 00:24:01.184 } 00:24:01.184 }, 00:24:01.184 { 00:24:01.184 "method": "nvmf_subsystem_add_ns", 00:24:01.184 "params": { 00:24:01.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.184 "namespace": { 00:24:01.184 "nsid": 1, 00:24:01.184 "bdev_name": "malloc0", 00:24:01.184 "nguid": "8173632895634825A75B8CFFEB6C3E0F", 00:24:01.184 "uuid": "81736328-9563-4825-a75b-8cffeb6c3e0f", 00:24:01.184 "no_auto_visible": false 00:24:01.184 } 00:24:01.184 } 00:24:01.184 }, 00:24:01.184 { 00:24:01.184 "method": "nvmf_subsystem_add_listener", 00:24:01.184 "params": { 00:24:01.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.184 "listen_address": { 00:24:01.184 "trtype": "TCP", 00:24:01.184 "adrfam": "IPv4", 00:24:01.184 "traddr": "10.0.0.2", 00:24:01.184 "trsvcid": "4420" 00:24:01.184 }, 00:24:01.184 "secure_channel": true 00:24:01.184 } 00:24:01.184 } 00:24:01.184 ] 00:24:01.184 } 00:24:01.184 ] 00:24:01.184 }' 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2416674 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2416674 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2416674 ']' 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:24:01.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.184 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.184 [2024-11-27 07:19:12.326969] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:24:01.184 [2024-11-27 07:19:12.327028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.445 [2024-11-27 07:19:12.418820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.445 [2024-11-27 07:19:12.454141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.445 [2024-11-27 07:19:12.454187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.445 [2024-11-27 07:19:12.454193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.445 [2024-11-27 07:19:12.454198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.445 [2024-11-27 07:19:12.454202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.445 [2024-11-27 07:19:12.454775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.445 [2024-11-27 07:19:12.648564] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.706 [2024-11-27 07:19:12.680590] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.706 [2024-11-27 07:19:12.680806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2416994 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2416994 /var/tmp/bdevperf.sock 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2416994 ']' 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:01.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.966 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:02.227 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.227 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.227 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:02.227 "subsystems": [ 00:24:02.227 { 00:24:02.227 "subsystem": "keyring", 00:24:02.227 "config": [ 00:24:02.227 { 00:24:02.227 "method": "keyring_file_add_key", 00:24:02.227 "params": { 00:24:02.227 "name": "key0", 00:24:02.227 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:02.227 } 00:24:02.227 } 00:24:02.227 ] 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "subsystem": "iobuf", 00:24:02.227 "config": [ 00:24:02.227 { 00:24:02.227 "method": "iobuf_set_options", 00:24:02.227 "params": { 00:24:02.227 "small_pool_count": 8192, 00:24:02.227 "large_pool_count": 1024, 00:24:02.227 "small_bufsize": 8192, 00:24:02.227 "large_bufsize": 135168, 00:24:02.227 "enable_numa": false 00:24:02.227 } 00:24:02.227 } 00:24:02.227 ] 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "subsystem": "sock", 00:24:02.227 "config": [ 00:24:02.227 { 00:24:02.227 "method": "sock_set_default_impl", 00:24:02.227 "params": { 00:24:02.227 "impl_name": "posix" 00:24:02.227 } 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "method": "sock_impl_set_options", 00:24:02.227 "params": { 00:24:02.227 "impl_name": "ssl", 00:24:02.227 "recv_buf_size": 4096, 00:24:02.227 "send_buf_size": 4096, 00:24:02.227 "enable_recv_pipe": true, 00:24:02.227 "enable_quickack": false, 00:24:02.227 "enable_placement_id": 0, 00:24:02.227 "enable_zerocopy_send_server": true, 00:24:02.227 "enable_zerocopy_send_client": false, 00:24:02.227 "zerocopy_threshold": 0, 00:24:02.227 "tls_version": 0, 00:24:02.227 "enable_ktls": false 00:24:02.227 } 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "method": "sock_impl_set_options", 00:24:02.227 "params": { 00:24:02.227 "impl_name": "posix", 00:24:02.227 "recv_buf_size": 2097152, 00:24:02.227 "send_buf_size": 2097152, 00:24:02.227 "enable_recv_pipe": true, 00:24:02.227 "enable_quickack": false, 00:24:02.227 "enable_placement_id": 0, 00:24:02.227 "enable_zerocopy_send_server": true, 00:24:02.227 "enable_zerocopy_send_client": false, 00:24:02.227 "zerocopy_threshold": 0, 00:24:02.227 "tls_version": 0, 00:24:02.227 "enable_ktls": false 00:24:02.227 } 00:24:02.227 } 00:24:02.227 ] 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "subsystem": "vmd", 00:24:02.227 "config": [] 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "subsystem": "accel", 00:24:02.227 "config": [ 00:24:02.227 { 00:24:02.227 "method": "accel_set_options", 00:24:02.227 "params": { 00:24:02.227 "small_cache_size": 128, 00:24:02.227 "large_cache_size": 16, 00:24:02.227 "task_count": 2048, 00:24:02.227 "sequence_count": 2048, 00:24:02.227 "buf_count": 2048 00:24:02.227 } 00:24:02.227 } 00:24:02.227 ] 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "subsystem": "bdev", 00:24:02.227 "config": [ 00:24:02.227 { 00:24:02.227 "method": "bdev_set_options", 00:24:02.227 "params": { 00:24:02.227 "bdev_io_pool_size": 65535, 00:24:02.227 "bdev_io_cache_size": 256, 00:24:02.227 "bdev_auto_examine": true, 00:24:02.227 "iobuf_small_cache_size": 128, 
00:24:02.227 "iobuf_large_cache_size": 16 00:24:02.227 } 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "method": "bdev_raid_set_options", 00:24:02.227 "params": { 00:24:02.227 "process_window_size_kb": 1024, 00:24:02.227 "process_max_bandwidth_mb_sec": 0 00:24:02.227 } 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "method": "bdev_iscsi_set_options", 00:24:02.227 "params": { 00:24:02.227 "timeout_sec": 30 00:24:02.227 } 00:24:02.227 }, 00:24:02.227 { 00:24:02.227 "method": "bdev_nvme_set_options", 00:24:02.228 "params": { 00:24:02.228 "action_on_timeout": "none", 00:24:02.228 "timeout_us": 0, 00:24:02.228 "timeout_admin_us": 0, 00:24:02.228 "keep_alive_timeout_ms": 10000, 00:24:02.228 "arbitration_burst": 0, 00:24:02.228 "low_priority_weight": 0, 00:24:02.228 "medium_priority_weight": 0, 00:24:02.228 "high_priority_weight": 0, 00:24:02.228 "nvme_adminq_poll_period_us": 10000, 00:24:02.228 "nvme_ioq_poll_period_us": 0, 00:24:02.228 "io_queue_requests": 512, 00:24:02.228 "delay_cmd_submit": true, 00:24:02.228 "transport_retry_count": 4, 00:24:02.228 "bdev_retry_count": 3, 00:24:02.228 "transport_ack_timeout": 0, 00:24:02.228 "ctrlr_loss_timeout_sec": 0, 00:24:02.228 "reconnect_delay_sec": 0, 00:24:02.228 "fast_io_fail_timeout_sec": 0, 00:24:02.228 "disable_auto_failback": false, 00:24:02.228 "generate_uuids": false, 00:24:02.228 "transport_tos": 0, 00:24:02.228 "nvme_error_stat": false, 00:24:02.228 "rdma_srq_size": 0, 00:24:02.228 "io_path_stat": false, 00:24:02.228 "allow_accel_sequence": false, 00:24:02.228 "rdma_max_cq_size": 0, 00:24:02.228 "rdma_cm_event_timeout_ms": 0, 00:24:02.228 "dhchap_digests": [ 00:24:02.228 "sha256", 00:24:02.228 "sha384", 00:24:02.228 "sha512" 00:24:02.228 ], 00:24:02.228 "dhchap_dhgroups": [ 00:24:02.228 "null", 00:24:02.228 "ffdhe2048", 00:24:02.228 "ffdhe3072", 00:24:02.228 "ffdhe4096", 00:24:02.228 "ffdhe6144", 00:24:02.228 "ffdhe8192" 00:24:02.228 ] 00:24:02.228 } 00:24:02.228 }, 00:24:02.228 { 00:24:02.228 "method": "bdev_nvme_attach_controller", 00:24:02.228 "params": { 00:24:02.228 "name": "TLSTEST", 00:24:02.228 "trtype": "TCP", 00:24:02.228 "adrfam": "IPv4", 00:24:02.228 "traddr": "10.0.0.2", 00:24:02.228 "trsvcid": "4420", 00:24:02.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.228 "prchk_reftag": false, 00:24:02.228 "prchk_guard": false, 00:24:02.228 "ctrlr_loss_timeout_sec": 0, 00:24:02.228 "reconnect_delay_sec": 0, 00:24:02.228 "fast_io_fail_timeout_sec": 0, 00:24:02.228 "psk": "key0", 00:24:02.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.228 "hdgst": false, 00:24:02.228 "ddgst": false, 00:24:02.228 "multipath": "multipath" 00:24:02.228 } 00:24:02.228 }, 00:24:02.228 { 00:24:02.228 "method": "bdev_nvme_set_hotplug", 00:24:02.228 "params": { 00:24:02.228 "period_us": 100000, 00:24:02.228 "enable": false 00:24:02.228 } 00:24:02.228 }, 00:24:02.228 { 00:24:02.228 "method": "bdev_wait_for_examine" 00:24:02.228 } 00:24:02.228 ] 00:24:02.228 }, 00:24:02.228 { 00:24:02.228 "subsystem": "nbd", 00:24:02.228 "config": [] 00:24:02.228 } 00:24:02.228 ] 00:24:02.228 }' 00:24:02.228 [2024-11-27 07:19:13.218116] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:24:02.228 [2024-11-27 07:19:13.218172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416994 ] 00:24:02.228 [2024-11-27 07:19:13.304531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.228 [2024-11-27 07:19:13.339790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.489 [2024-11-27 07:19:13.480261] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.062 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.062 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:03.062 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:03.062 Running I/O for 10 seconds... 00:24:04.948 3684.00 IOPS, 14.39 MiB/s [2024-11-27T06:19:17.536Z] 4688.50 IOPS, 18.31 MiB/s [2024-11-27T06:19:18.481Z] 4604.67 IOPS, 17.99 MiB/s [2024-11-27T06:19:19.422Z] 4803.50 IOPS, 18.76 MiB/s [2024-11-27T06:19:20.363Z] 5019.80 IOPS, 19.61 MiB/s [2024-11-27T06:19:21.307Z] 5186.00 IOPS, 20.26 MiB/s [2024-11-27T06:19:22.250Z] 5315.86 IOPS, 20.77 MiB/s [2024-11-27T06:19:23.191Z] 5280.50 IOPS, 20.63 MiB/s [2024-11-27T06:19:24.197Z] 5270.78 IOPS, 20.59 MiB/s [2024-11-27T06:19:24.197Z] 5394.10 IOPS, 21.07 MiB/s 00:24:12.992 Latency(us) 00:24:12.992 [2024-11-27T06:19:24.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.992 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:12.992 Verification LBA range: start 0x0 length 0x2000 00:24:12.992 TLSTESTn1 : 10.02 5396.84 21.08 0.00 0.00 23678.71 5242.88 65972.91 00:24:12.992 [2024-11-27T06:19:24.197Z] =================================================================================================================== 00:24:12.992 [2024-11-27T06:19:24.197Z] Total : 5396.84 21.08 0.00 0.00 23678.71 5242.88 65972.91 00:24:12.992 { 00:24:12.992 "results": [ 00:24:12.992 { 00:24:12.992 "job": "TLSTESTn1", 00:24:12.992 "core_mask": "0x4", 00:24:12.992 "workload": "verify", 00:24:12.992 "status": "finished", 00:24:12.992 "verify_range": { 00:24:12.992 "start": 0, 00:24:12.992 "length": 8192 00:24:12.992 }, 00:24:12.992 "queue_depth": 128, 00:24:12.992 "io_size": 4096, 00:24:12.992 "runtime": 10.018463, 00:24:12.992 "iops": 5396.8358220218015, 00:24:12.992 "mibps": 21.081389929772662, 00:24:12.992 "io_failed": 0, 00:24:12.992 "io_timeout": 0, 00:24:12.992 "avg_latency_us": 23678.708478705827, 00:24:12.992 "min_latency_us": 5242.88, 00:24:12.992 "max_latency_us": 65972.90666666666 00:24:12.992 } 00:24:12.992 ], 00:24:12.992 "core_count": 1 00:24:12.992 } 00:24:12.992 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.992 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2416994 00:24:12.992 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2416994 ']' 00:24:12.992 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2416994 00:24:12.992 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:12.992 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.992 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416994 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416994' 00:24:13.252 killing process with pid 2416994 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2416994 00:24:13.252 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.252 00:24:13.252 Latency(us) 00:24:13.252 [2024-11-27T06:19:24.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.252 [2024-11-27T06:19:24.457Z] =================================================================================================================== 00:24:13.252 [2024-11-27T06:19:24.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2416994 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2416674 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2416674 ']' 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2416674 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416674 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416674' 00:24:13.252 killing process with pid 2416674 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2416674 00:24:13.252 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2416674 00:24:13.512 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:13.512 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.512 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.512 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.512 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2419062 00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2419062 00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
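
Worth noting how the tests at target/tls.sh@205-@217 above avoided re-issuing per-object RPCs: save_config dumped the live target and bdevperf state as JSON (the tgtconf and bdevperfconf blobs), and each blob was echoed back into the replacement process through a /dev/fd descriptor via -c, so the new target and bdevperf came up with the keyring, subsystem, TLS listener, and controller attach already applied. A rough hand-rolled equivalent under the same assumptions (file name is illustrative):

  # capture the running target's state, then boot a new target from it;
  # <(cat ...) expands to a /dev/fd path, matching the -c /dev/fd/62 above
  scripts/rpc.py save_config > tgt.json
  build/bin/nvmf_tgt -m 0x2 -c <(cat tgt.json)
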
00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2419062 ']' 00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.513 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.513 [2024-11-27 07:19:24.550934] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:24:13.513 [2024-11-27 07:19:24.550991] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.513 [2024-11-27 07:19:24.625767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.513 [2024-11-27 07:19:24.660271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.513 [2024-11-27 07:19:24.660305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.513 [2024-11-27 07:19:24.660314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.513 [2024-11-27 07:19:24.660322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.513 [2024-11-27 07:19:24.660330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
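Per the app_setup_trace notices just printed, the tracepoint data for this instance can be snapshotted while the target runs, or the shared-memory file copied for offline analysis; both commands are quoted directly from the notices above:

    # live snapshot of the nvmf app's tracepoints (instance id 0)
    spdk_trace -s nvmf -i 0
    # or keep the raw shm file for later analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/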
00:24:13.513 [2024-11-27 07:19:24.660881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.8HJ5SGooEZ 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8HJ5SGooEZ 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:14.452 [2024-11-27 07:19:25.559862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.452 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:14.712 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:14.973 [2024-11-27 07:19:25.940817] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.973 [2024-11-27 07:19:25.941157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.973 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:14.973 malloc0 00:24:14.973 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:15.232 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2419627 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2419627 /var/tmp/bdevperf.sock 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2419627 ']' 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.492 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.753 [2024-11-27 07:19:26.725313] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:24:15.753 [2024-11-27 07:19:26.725385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419627 ] 00:24:15.753 [2024-11-27 07:19:26.812904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.753 [2024-11-27 07:19:26.846573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.694 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.694 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:16.694 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:24:16.694 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:16.694 [2024-11-27 07:19:27.861333] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.955 nvme0n1 00:24:16.955 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.955 Running I/O for 1 seconds... 
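Pulling the xtrace output above together, the TLS path under test amounts to one RPC sequence on the target plus a mirrored key registration on the bdevperf initiator. All values below (key file /tmp/tmp.8HJ5SGooEZ, the NQNs, 10.0.0.2:4420) are the ones this run used; rpc.py stands for the scripts/rpc.py invocations in the trace:

    # target side: TCP transport, subsystem, TLS listener (-k), namespace, PSK
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side (bdevperf): register the same PSK, then attach over TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1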
00:24:17.898 5146.00 IOPS, 20.10 MiB/s 00:24:17.898 Latency(us) 00:24:17.898 [2024-11-27T06:19:29.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.898 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:17.898 Verification LBA range: start 0x0 length 0x2000 00:24:17.898 nvme0n1 : 1.03 5123.66 20.01 0.00 0.00 24669.28 5679.79 36263.25 00:24:17.898 [2024-11-27T06:19:29.103Z] =================================================================================================================== 00:24:17.898 [2024-11-27T06:19:29.103Z] Total : 5123.66 20.01 0.00 0.00 24669.28 5679.79 36263.25 00:24:17.899 { 00:24:17.899 "results": [ 00:24:17.899 { 00:24:17.899 "job": "nvme0n1", 00:24:17.899 "core_mask": "0x2", 00:24:17.899 "workload": "verify", 00:24:17.899 "status": "finished", 00:24:17.899 "verify_range": { 00:24:17.899 "start": 0, 00:24:17.899 "length": 8192 00:24:17.899 }, 00:24:17.899 "queue_depth": 128, 00:24:17.899 "io_size": 4096, 00:24:17.899 "runtime": 1.029538, 00:24:17.899 "iops": 5123.657407497343, 00:24:17.899 "mibps": 20.014286748036497, 00:24:17.899 "io_failed": 0, 00:24:17.899 "io_timeout": 0, 00:24:17.899 "avg_latency_us": 24669.282679304895, 00:24:17.899 "min_latency_us": 5679.786666666667, 00:24:17.899 "max_latency_us": 36263.253333333334 00:24:17.899 } 00:24:17.899 ], 00:24:17.899 "core_count": 1 00:24:17.899 } 00:24:17.899 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2419627 00:24:17.899 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2419627 ']' 00:24:17.899 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2419627 00:24:17.899 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419627 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419627' 00:24:18.159 killing process with pid 2419627 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2419627 00:24:18.159 Received shutdown signal, test time was about 1.000000 seconds 00:24:18.159 00:24:18.159 Latency(us) 00:24:18.159 [2024-11-27T06:19:29.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.159 [2024-11-27T06:19:29.364Z] =================================================================================================================== 00:24:18.159 [2024-11-27T06:19:29.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2419627 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2419062 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2419062 ']' 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2419062 00:24:18.159 07:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419062 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419062' 00:24:18.159 killing process with pid 2419062 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2419062 00:24:18.159 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2419062 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2420080 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2420080 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2420080 ']' 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.420 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.421 [2024-11-27 07:19:29.526801] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:24:18.421 [2024-11-27 07:19:29.526857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.421 [2024-11-27 07:19:29.620933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.682 [2024-11-27 07:19:29.669974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.682 [2024-11-27 07:19:29.670026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:18.682 [2024-11-27 07:19:29.670034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.682 [2024-11-27 07:19:29.670041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.682 [2024-11-27 07:19:29.670048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.682 [2024-11-27 07:19:29.670828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.255 [2024-11-27 07:19:30.359520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.255 malloc0 00:24:19.255 [2024-11-27 07:19:30.386392] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.255 [2024-11-27 07:19:30.386648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2420428 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2420428 /var/tmp/bdevperf.sock 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2420428 ']' 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.255 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.516 [2024-11-27 07:19:30.466679] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:24:19.516 [2024-11-27 07:19:30.466732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420428 ] 00:24:19.516 [2024-11-27 07:19:30.526024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.516 [2024-11-27 07:19:30.555939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.516 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.516 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:19.516 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8HJ5SGooEZ 00:24:19.777 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:19.777 [2024-11-27 07:19:30.979031] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.039 nvme0n1 00:24:20.039 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.039 Running I/O for 1 seconds... 00:24:21.039 5273.00 IOPS, 20.60 MiB/s 00:24:21.039 Latency(us) 00:24:21.039 [2024-11-27T06:19:32.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.039 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:21.039 Verification LBA range: start 0x0 length 0x2000 00:24:21.039 nvme0n1 : 1.01 5339.98 20.86 0.00 0.00 23826.83 4751.36 109226.67 00:24:21.039 [2024-11-27T06:19:32.244Z] =================================================================================================================== 00:24:21.039 [2024-11-27T06:19:32.244Z] Total : 5339.98 20.86 0.00 0.00 23826.83 4751.36 109226.67 00:24:21.039 { 00:24:21.039 "results": [ 00:24:21.039 { 00:24:21.039 "job": "nvme0n1", 00:24:21.039 "core_mask": "0x2", 00:24:21.039 "workload": "verify", 00:24:21.039 "status": "finished", 00:24:21.039 "verify_range": { 00:24:21.039 "start": 0, 00:24:21.039 "length": 8192 00:24:21.039 }, 00:24:21.039 "queue_depth": 128, 00:24:21.039 "io_size": 4096, 00:24:21.039 "runtime": 1.011427, 00:24:21.039 "iops": 5339.9800479916, 00:24:21.039 "mibps": 20.85929706246719, 00:24:21.039 "io_failed": 0, 00:24:21.039 "io_timeout": 0, 00:24:21.039 "avg_latency_us": 23826.833687588718, 00:24:21.039 "min_latency_us": 4751.36, 00:24:21.039 "max_latency_us": 109226.66666666667 00:24:21.039 } 00:24:21.039 ], 00:24:21.039 "core_count": 1 00:24:21.039 } 00:24:21.039 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:21.039 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.039 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.301 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.301 07:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:21.301 "subsystems": [ 00:24:21.301 { 00:24:21.301 "subsystem": "keyring", 00:24:21.301 "config": [ 00:24:21.301 { 00:24:21.301 "method": "keyring_file_add_key", 00:24:21.301 "params": { 00:24:21.301 "name": "key0", 00:24:21.301 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:21.301 } 00:24:21.301 } 00:24:21.301 ] 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "subsystem": "iobuf", 00:24:21.301 "config": [ 00:24:21.301 { 00:24:21.301 "method": "iobuf_set_options", 00:24:21.301 "params": { 00:24:21.301 "small_pool_count": 8192, 00:24:21.301 "large_pool_count": 1024, 00:24:21.301 "small_bufsize": 8192, 00:24:21.301 "large_bufsize": 135168, 00:24:21.301 "enable_numa": false 00:24:21.301 } 00:24:21.301 } 00:24:21.301 ] 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "subsystem": "sock", 00:24:21.301 "config": [ 00:24:21.301 { 00:24:21.301 "method": "sock_set_default_impl", 00:24:21.301 "params": { 00:24:21.301 "impl_name": "posix" 00:24:21.301 } 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "method": "sock_impl_set_options", 00:24:21.301 "params": { 00:24:21.301 "impl_name": "ssl", 00:24:21.301 "recv_buf_size": 4096, 00:24:21.301 "send_buf_size": 4096, 00:24:21.301 "enable_recv_pipe": true, 00:24:21.301 "enable_quickack": false, 00:24:21.301 "enable_placement_id": 0, 00:24:21.301 "enable_zerocopy_send_server": true, 00:24:21.301 "enable_zerocopy_send_client": false, 00:24:21.301 "zerocopy_threshold": 0, 00:24:21.301 "tls_version": 0, 00:24:21.301 "enable_ktls": false 00:24:21.301 } 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "method": "sock_impl_set_options", 00:24:21.301 "params": { 00:24:21.301 "impl_name": "posix", 00:24:21.301 "recv_buf_size": 2097152, 00:24:21.301 "send_buf_size": 2097152, 00:24:21.301 "enable_recv_pipe": true, 00:24:21.301 "enable_quickack": false, 00:24:21.301 "enable_placement_id": 0, 00:24:21.301 "enable_zerocopy_send_server": true, 00:24:21.301 "enable_zerocopy_send_client": false, 00:24:21.301 "zerocopy_threshold": 0, 00:24:21.301 "tls_version": 0, 00:24:21.301 "enable_ktls": false 00:24:21.301 } 00:24:21.301 } 00:24:21.301 ] 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "subsystem": "vmd", 00:24:21.301 "config": [] 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "subsystem": "accel", 00:24:21.301 "config": [ 00:24:21.301 { 00:24:21.301 "method": "accel_set_options", 00:24:21.301 "params": { 00:24:21.301 "small_cache_size": 128, 00:24:21.301 "large_cache_size": 16, 00:24:21.301 "task_count": 2048, 00:24:21.301 "sequence_count": 2048, 00:24:21.301 "buf_count": 2048 00:24:21.301 } 00:24:21.301 } 00:24:21.301 ] 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "subsystem": "bdev", 00:24:21.301 "config": [ 00:24:21.301 { 00:24:21.301 "method": "bdev_set_options", 00:24:21.301 "params": { 00:24:21.301 "bdev_io_pool_size": 65535, 00:24:21.301 "bdev_io_cache_size": 256, 00:24:21.301 "bdev_auto_examine": true, 00:24:21.301 "iobuf_small_cache_size": 128, 00:24:21.301 "iobuf_large_cache_size": 16 00:24:21.301 } 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "method": "bdev_raid_set_options", 00:24:21.301 "params": { 00:24:21.301 "process_window_size_kb": 1024, 00:24:21.301 "process_max_bandwidth_mb_sec": 0 00:24:21.301 } 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "method": "bdev_iscsi_set_options", 00:24:21.301 "params": { 00:24:21.301 "timeout_sec": 30 00:24:21.301 } 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "method": "bdev_nvme_set_options", 00:24:21.301 "params": { 00:24:21.301 "action_on_timeout": "none", 00:24:21.301 
"timeout_us": 0, 00:24:21.301 "timeout_admin_us": 0, 00:24:21.301 "keep_alive_timeout_ms": 10000, 00:24:21.301 "arbitration_burst": 0, 00:24:21.301 "low_priority_weight": 0, 00:24:21.301 "medium_priority_weight": 0, 00:24:21.301 "high_priority_weight": 0, 00:24:21.301 "nvme_adminq_poll_period_us": 10000, 00:24:21.301 "nvme_ioq_poll_period_us": 0, 00:24:21.301 "io_queue_requests": 0, 00:24:21.301 "delay_cmd_submit": true, 00:24:21.301 "transport_retry_count": 4, 00:24:21.301 "bdev_retry_count": 3, 00:24:21.301 "transport_ack_timeout": 0, 00:24:21.301 "ctrlr_loss_timeout_sec": 0, 00:24:21.301 "reconnect_delay_sec": 0, 00:24:21.301 "fast_io_fail_timeout_sec": 0, 00:24:21.301 "disable_auto_failback": false, 00:24:21.301 "generate_uuids": false, 00:24:21.301 "transport_tos": 0, 00:24:21.301 "nvme_error_stat": false, 00:24:21.301 "rdma_srq_size": 0, 00:24:21.301 "io_path_stat": false, 00:24:21.301 "allow_accel_sequence": false, 00:24:21.301 "rdma_max_cq_size": 0, 00:24:21.301 "rdma_cm_event_timeout_ms": 0, 00:24:21.301 "dhchap_digests": [ 00:24:21.301 "sha256", 00:24:21.301 "sha384", 00:24:21.301 "sha512" 00:24:21.301 ], 00:24:21.301 "dhchap_dhgroups": [ 00:24:21.301 "null", 00:24:21.301 "ffdhe2048", 00:24:21.301 "ffdhe3072", 00:24:21.301 "ffdhe4096", 00:24:21.301 "ffdhe6144", 00:24:21.301 "ffdhe8192" 00:24:21.301 ] 00:24:21.301 } 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "method": "bdev_nvme_set_hotplug", 00:24:21.301 "params": { 00:24:21.301 "period_us": 100000, 00:24:21.301 "enable": false 00:24:21.301 } 00:24:21.301 }, 00:24:21.301 { 00:24:21.301 "method": "bdev_malloc_create", 00:24:21.301 "params": { 00:24:21.301 "name": "malloc0", 00:24:21.301 "num_blocks": 8192, 00:24:21.301 "block_size": 4096, 00:24:21.301 "physical_block_size": 4096, 00:24:21.301 "uuid": "cd254559-e367-49a6-92ef-466e0ba8d7cc", 00:24:21.301 "optimal_io_boundary": 0, 00:24:21.301 "md_size": 0, 00:24:21.301 "dif_type": 0, 00:24:21.301 "dif_is_head_of_md": false, 00:24:21.301 "dif_pi_format": 0 00:24:21.301 } 00:24:21.301 }, 00:24:21.302 { 00:24:21.302 "method": "bdev_wait_for_examine" 00:24:21.302 } 00:24:21.302 ] 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "subsystem": "nbd", 00:24:21.302 "config": [] 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "subsystem": "scheduler", 00:24:21.302 "config": [ 00:24:21.302 { 00:24:21.302 "method": "framework_set_scheduler", 00:24:21.302 "params": { 00:24:21.302 "name": "static" 00:24:21.302 } 00:24:21.302 } 00:24:21.302 ] 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "subsystem": "nvmf", 00:24:21.302 "config": [ 00:24:21.302 { 00:24:21.302 "method": "nvmf_set_config", 00:24:21.302 "params": { 00:24:21.302 "discovery_filter": "match_any", 00:24:21.302 "admin_cmd_passthru": { 00:24:21.302 "identify_ctrlr": false 00:24:21.302 }, 00:24:21.302 "dhchap_digests": [ 00:24:21.302 "sha256", 00:24:21.302 "sha384", 00:24:21.302 "sha512" 00:24:21.302 ], 00:24:21.302 "dhchap_dhgroups": [ 00:24:21.302 "null", 00:24:21.302 "ffdhe2048", 00:24:21.302 "ffdhe3072", 00:24:21.302 "ffdhe4096", 00:24:21.302 "ffdhe6144", 00:24:21.302 "ffdhe8192" 00:24:21.302 ] 00:24:21.302 } 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "method": "nvmf_set_max_subsystems", 00:24:21.302 "params": { 00:24:21.302 "max_subsystems": 1024 00:24:21.302 } 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "method": "nvmf_set_crdt", 00:24:21.302 "params": { 00:24:21.302 "crdt1": 0, 00:24:21.302 "crdt2": 0, 00:24:21.302 "crdt3": 0 00:24:21.302 } 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "method": "nvmf_create_transport", 00:24:21.302 "params": 
{ 00:24:21.302 "trtype": "TCP", 00:24:21.302 "max_queue_depth": 128, 00:24:21.302 "max_io_qpairs_per_ctrlr": 127, 00:24:21.302 "in_capsule_data_size": 4096, 00:24:21.302 "max_io_size": 131072, 00:24:21.302 "io_unit_size": 131072, 00:24:21.302 "max_aq_depth": 128, 00:24:21.302 "num_shared_buffers": 511, 00:24:21.302 "buf_cache_size": 4294967295, 00:24:21.302 "dif_insert_or_strip": false, 00:24:21.302 "zcopy": false, 00:24:21.302 "c2h_success": false, 00:24:21.302 "sock_priority": 0, 00:24:21.302 "abort_timeout_sec": 1, 00:24:21.302 "ack_timeout": 0, 00:24:21.302 "data_wr_pool_size": 0 00:24:21.302 } 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "method": "nvmf_create_subsystem", 00:24:21.302 "params": { 00:24:21.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.302 "allow_any_host": false, 00:24:21.302 "serial_number": "00000000000000000000", 00:24:21.302 "model_number": "SPDK bdev Controller", 00:24:21.302 "max_namespaces": 32, 00:24:21.302 "min_cntlid": 1, 00:24:21.302 "max_cntlid": 65519, 00:24:21.302 "ana_reporting": false 00:24:21.302 } 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "method": "nvmf_subsystem_add_host", 00:24:21.302 "params": { 00:24:21.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.302 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.302 "psk": "key0" 00:24:21.302 } 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "method": "nvmf_subsystem_add_ns", 00:24:21.302 "params": { 00:24:21.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.302 "namespace": { 00:24:21.302 "nsid": 1, 00:24:21.302 "bdev_name": "malloc0", 00:24:21.302 "nguid": "CD254559E36749A692EF466E0BA8D7CC", 00:24:21.302 "uuid": "cd254559-e367-49a6-92ef-466e0ba8d7cc", 00:24:21.302 "no_auto_visible": false 00:24:21.302 } 00:24:21.302 } 00:24:21.302 }, 00:24:21.302 { 00:24:21.302 "method": "nvmf_subsystem_add_listener", 00:24:21.302 "params": { 00:24:21.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.302 "listen_address": { 00:24:21.302 "trtype": "TCP", 00:24:21.302 "adrfam": "IPv4", 00:24:21.302 "traddr": "10.0.0.2", 00:24:21.302 "trsvcid": "4420" 00:24:21.302 }, 00:24:21.302 "secure_channel": false, 00:24:21.302 "sock_impl": "ssl" 00:24:21.302 } 00:24:21.302 } 00:24:21.302 ] 00:24:21.302 } 00:24:21.302 ] 00:24:21.302 }' 00:24:21.302 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:21.564 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:21.564 "subsystems": [ 00:24:21.564 { 00:24:21.564 "subsystem": "keyring", 00:24:21.564 "config": [ 00:24:21.564 { 00:24:21.564 "method": "keyring_file_add_key", 00:24:21.564 "params": { 00:24:21.564 "name": "key0", 00:24:21.564 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:21.564 } 00:24:21.564 } 00:24:21.564 ] 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "subsystem": "iobuf", 00:24:21.564 "config": [ 00:24:21.564 { 00:24:21.564 "method": "iobuf_set_options", 00:24:21.564 "params": { 00:24:21.564 "small_pool_count": 8192, 00:24:21.564 "large_pool_count": 1024, 00:24:21.564 "small_bufsize": 8192, 00:24:21.564 "large_bufsize": 135168, 00:24:21.564 "enable_numa": false 00:24:21.564 } 00:24:21.564 } 00:24:21.564 ] 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "subsystem": "sock", 00:24:21.564 "config": [ 00:24:21.564 { 00:24:21.564 "method": "sock_set_default_impl", 00:24:21.564 "params": { 00:24:21.564 "impl_name": "posix" 00:24:21.564 } 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "method": "sock_impl_set_options", 00:24:21.564 
"params": { 00:24:21.564 "impl_name": "ssl", 00:24:21.564 "recv_buf_size": 4096, 00:24:21.564 "send_buf_size": 4096, 00:24:21.564 "enable_recv_pipe": true, 00:24:21.564 "enable_quickack": false, 00:24:21.564 "enable_placement_id": 0, 00:24:21.564 "enable_zerocopy_send_server": true, 00:24:21.564 "enable_zerocopy_send_client": false, 00:24:21.564 "zerocopy_threshold": 0, 00:24:21.564 "tls_version": 0, 00:24:21.564 "enable_ktls": false 00:24:21.564 } 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "method": "sock_impl_set_options", 00:24:21.564 "params": { 00:24:21.564 "impl_name": "posix", 00:24:21.564 "recv_buf_size": 2097152, 00:24:21.564 "send_buf_size": 2097152, 00:24:21.564 "enable_recv_pipe": true, 00:24:21.564 "enable_quickack": false, 00:24:21.564 "enable_placement_id": 0, 00:24:21.564 "enable_zerocopy_send_server": true, 00:24:21.564 "enable_zerocopy_send_client": false, 00:24:21.564 "zerocopy_threshold": 0, 00:24:21.564 "tls_version": 0, 00:24:21.564 "enable_ktls": false 00:24:21.564 } 00:24:21.564 } 00:24:21.564 ] 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "subsystem": "vmd", 00:24:21.564 "config": [] 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "subsystem": "accel", 00:24:21.564 "config": [ 00:24:21.564 { 00:24:21.564 "method": "accel_set_options", 00:24:21.564 "params": { 00:24:21.564 "small_cache_size": 128, 00:24:21.564 "large_cache_size": 16, 00:24:21.564 "task_count": 2048, 00:24:21.564 "sequence_count": 2048, 00:24:21.564 "buf_count": 2048 00:24:21.564 } 00:24:21.564 } 00:24:21.564 ] 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "subsystem": "bdev", 00:24:21.564 "config": [ 00:24:21.564 { 00:24:21.564 "method": "bdev_set_options", 00:24:21.564 "params": { 00:24:21.564 "bdev_io_pool_size": 65535, 00:24:21.564 "bdev_io_cache_size": 256, 00:24:21.564 "bdev_auto_examine": true, 00:24:21.564 "iobuf_small_cache_size": 128, 00:24:21.564 "iobuf_large_cache_size": 16 00:24:21.564 } 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "method": "bdev_raid_set_options", 00:24:21.564 "params": { 00:24:21.564 "process_window_size_kb": 1024, 00:24:21.564 "process_max_bandwidth_mb_sec": 0 00:24:21.564 } 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "method": "bdev_iscsi_set_options", 00:24:21.564 "params": { 00:24:21.564 "timeout_sec": 30 00:24:21.564 } 00:24:21.564 }, 00:24:21.564 { 00:24:21.564 "method": "bdev_nvme_set_options", 00:24:21.564 "params": { 00:24:21.564 "action_on_timeout": "none", 00:24:21.564 "timeout_us": 0, 00:24:21.564 "timeout_admin_us": 0, 00:24:21.564 "keep_alive_timeout_ms": 10000, 00:24:21.564 "arbitration_burst": 0, 00:24:21.564 "low_priority_weight": 0, 00:24:21.564 "medium_priority_weight": 0, 00:24:21.564 "high_priority_weight": 0, 00:24:21.564 "nvme_adminq_poll_period_us": 10000, 00:24:21.564 "nvme_ioq_poll_period_us": 0, 00:24:21.564 "io_queue_requests": 512, 00:24:21.564 "delay_cmd_submit": true, 00:24:21.564 "transport_retry_count": 4, 00:24:21.564 "bdev_retry_count": 3, 00:24:21.564 "transport_ack_timeout": 0, 00:24:21.564 "ctrlr_loss_timeout_sec": 0, 00:24:21.564 "reconnect_delay_sec": 0, 00:24:21.564 "fast_io_fail_timeout_sec": 0, 00:24:21.564 "disable_auto_failback": false, 00:24:21.564 "generate_uuids": false, 00:24:21.564 "transport_tos": 0, 00:24:21.564 "nvme_error_stat": false, 00:24:21.564 "rdma_srq_size": 0, 00:24:21.564 "io_path_stat": false, 00:24:21.564 "allow_accel_sequence": false, 00:24:21.564 "rdma_max_cq_size": 0, 00:24:21.564 "rdma_cm_event_timeout_ms": 0, 00:24:21.564 "dhchap_digests": [ 00:24:21.564 "sha256", 00:24:21.564 "sha384", 00:24:21.564 
"sha512" 00:24:21.564 ], 00:24:21.564 "dhchap_dhgroups": [ 00:24:21.564 "null", 00:24:21.564 "ffdhe2048", 00:24:21.564 "ffdhe3072", 00:24:21.564 "ffdhe4096", 00:24:21.564 "ffdhe6144", 00:24:21.565 "ffdhe8192" 00:24:21.565 ] 00:24:21.565 } 00:24:21.565 }, 00:24:21.565 { 00:24:21.565 "method": "bdev_nvme_attach_controller", 00:24:21.565 "params": { 00:24:21.565 "name": "nvme0", 00:24:21.565 "trtype": "TCP", 00:24:21.565 "adrfam": "IPv4", 00:24:21.565 "traddr": "10.0.0.2", 00:24:21.565 "trsvcid": "4420", 00:24:21.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.565 "prchk_reftag": false, 00:24:21.565 "prchk_guard": false, 00:24:21.565 "ctrlr_loss_timeout_sec": 0, 00:24:21.565 "reconnect_delay_sec": 0, 00:24:21.565 "fast_io_fail_timeout_sec": 0, 00:24:21.565 "psk": "key0", 00:24:21.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.565 "hdgst": false, 00:24:21.565 "ddgst": false, 00:24:21.565 "multipath": "multipath" 00:24:21.565 } 00:24:21.565 }, 00:24:21.565 { 00:24:21.565 "method": "bdev_nvme_set_hotplug", 00:24:21.565 "params": { 00:24:21.565 "period_us": 100000, 00:24:21.565 "enable": false 00:24:21.565 } 00:24:21.565 }, 00:24:21.565 { 00:24:21.565 "method": "bdev_enable_histogram", 00:24:21.565 "params": { 00:24:21.565 "name": "nvme0n1", 00:24:21.565 "enable": true 00:24:21.565 } 00:24:21.565 }, 00:24:21.565 { 00:24:21.565 "method": "bdev_wait_for_examine" 00:24:21.565 } 00:24:21.565 ] 00:24:21.565 }, 00:24:21.565 { 00:24:21.565 "subsystem": "nbd", 00:24:21.565 "config": [] 00:24:21.565 } 00:24:21.565 ] 00:24:21.565 }' 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2420428 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2420428 ']' 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2420428 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420428 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420428' 00:24:21.565 killing process with pid 2420428 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2420428 00:24:21.565 Received shutdown signal, test time was about 1.000000 seconds 00:24:21.565 00:24:21.565 Latency(us) 00:24:21.565 [2024-11-27T06:19:32.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.565 [2024-11-27T06:19:32.770Z] =================================================================================================================== 00:24:21.565 [2024-11-27T06:19:32.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2420428 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2420080 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2420080 
']' 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2420080 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.565 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420080 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420080' 00:24:21.837 killing process with pid 2420080 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2420080 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2420080 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.837 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:21.837 "subsystems": [ 00:24:21.837 { 00:24:21.837 "subsystem": "keyring", 00:24:21.837 "config": [ 00:24:21.837 { 00:24:21.837 "method": "keyring_file_add_key", 00:24:21.837 "params": { 00:24:21.837 "name": "key0", 00:24:21.837 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:21.837 } 00:24:21.837 } 00:24:21.837 ] 00:24:21.837 }, 00:24:21.837 { 00:24:21.837 "subsystem": "iobuf", 00:24:21.837 "config": [ 00:24:21.837 { 00:24:21.837 "method": "iobuf_set_options", 00:24:21.837 "params": { 00:24:21.837 "small_pool_count": 8192, 00:24:21.837 "large_pool_count": 1024, 00:24:21.837 "small_bufsize": 8192, 00:24:21.837 "large_bufsize": 135168, 00:24:21.837 "enable_numa": false 00:24:21.837 } 00:24:21.837 } 00:24:21.837 ] 00:24:21.837 }, 00:24:21.837 { 00:24:21.837 "subsystem": "sock", 00:24:21.837 "config": [ 00:24:21.837 { 00:24:21.837 "method": "sock_set_default_impl", 00:24:21.837 "params": { 00:24:21.837 "impl_name": "posix" 00:24:21.837 } 00:24:21.837 }, 00:24:21.837 { 00:24:21.837 "method": "sock_impl_set_options", 00:24:21.837 "params": { 00:24:21.837 "impl_name": "ssl", 00:24:21.837 "recv_buf_size": 4096, 00:24:21.837 "send_buf_size": 4096, 00:24:21.837 "enable_recv_pipe": true, 00:24:21.837 "enable_quickack": false, 00:24:21.838 "enable_placement_id": 0, 00:24:21.838 "enable_zerocopy_send_server": true, 00:24:21.838 "enable_zerocopy_send_client": false, 00:24:21.838 "zerocopy_threshold": 0, 00:24:21.838 "tls_version": 0, 00:24:21.838 "enable_ktls": false 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "sock_impl_set_options", 00:24:21.838 "params": { 00:24:21.838 "impl_name": "posix", 00:24:21.838 "recv_buf_size": 2097152, 00:24:21.838 "send_buf_size": 2097152, 00:24:21.838 "enable_recv_pipe": true, 00:24:21.838 "enable_quickack": false, 00:24:21.838 "enable_placement_id": 0, 00:24:21.838 "enable_zerocopy_send_server": true, 00:24:21.838 "enable_zerocopy_send_client": false, 00:24:21.838 "zerocopy_threshold": 0, 00:24:21.838 "tls_version": 0, 00:24:21.838 "enable_ktls": 
false 00:24:21.838 } 00:24:21.838 } 00:24:21.838 ] 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "subsystem": "vmd", 00:24:21.838 "config": [] 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "subsystem": "accel", 00:24:21.838 "config": [ 00:24:21.838 { 00:24:21.838 "method": "accel_set_options", 00:24:21.838 "params": { 00:24:21.838 "small_cache_size": 128, 00:24:21.838 "large_cache_size": 16, 00:24:21.838 "task_count": 2048, 00:24:21.838 "sequence_count": 2048, 00:24:21.838 "buf_count": 2048 00:24:21.838 } 00:24:21.838 } 00:24:21.838 ] 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "subsystem": "bdev", 00:24:21.838 "config": [ 00:24:21.838 { 00:24:21.838 "method": "bdev_set_options", 00:24:21.838 "params": { 00:24:21.838 "bdev_io_pool_size": 65535, 00:24:21.838 "bdev_io_cache_size": 256, 00:24:21.838 "bdev_auto_examine": true, 00:24:21.838 "iobuf_small_cache_size": 128, 00:24:21.838 "iobuf_large_cache_size": 16 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "bdev_raid_set_options", 00:24:21.838 "params": { 00:24:21.838 "process_window_size_kb": 1024, 00:24:21.838 "process_max_bandwidth_mb_sec": 0 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "bdev_iscsi_set_options", 00:24:21.838 "params": { 00:24:21.838 "timeout_sec": 30 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "bdev_nvme_set_options", 00:24:21.838 "params": { 00:24:21.838 "action_on_timeout": "none", 00:24:21.838 "timeout_us": 0, 00:24:21.838 "timeout_admin_us": 0, 00:24:21.838 "keep_alive_timeout_ms": 10000, 00:24:21.838 "arbitration_burst": 0, 00:24:21.838 "low_priority_weight": 0, 00:24:21.838 "medium_priority_weight": 0, 00:24:21.838 "high_priority_weight": 0, 00:24:21.838 "nvme_adminq_poll_period_us": 10000, 00:24:21.838 "nvme_ioq_poll_period_us": 0, 00:24:21.838 "io_queue_requests": 0, 00:24:21.838 "delay_cmd_submit": true, 00:24:21.838 "transport_retry_count": 4, 00:24:21.838 "bdev_retry_count": 3, 00:24:21.838 "transport_ack_timeout": 0, 00:24:21.838 "ctrlr_loss_timeout_sec": 0, 00:24:21.838 "reconnect_delay_sec": 0, 00:24:21.838 "fast_io_fail_timeout_sec": 0, 00:24:21.838 "disable_auto_failback": false, 00:24:21.838 "generate_uuids": false, 00:24:21.838 "transport_tos": 0, 00:24:21.838 "nvme_error_stat": false, 00:24:21.838 "rdma_srq_size": 0, 00:24:21.838 "io_path_stat": false, 00:24:21.838 "allow_accel_sequence": false, 00:24:21.838 "rdma_max_cq_size": 0, 00:24:21.838 "rdma_cm_event_timeout_ms": 0, 00:24:21.838 "dhchap_digests": [ 00:24:21.838 "sha256", 00:24:21.838 "sha384", 00:24:21.838 "sha512" 00:24:21.838 ], 00:24:21.838 "dhchap_dhgroups": [ 00:24:21.838 "null", 00:24:21.838 "ffdhe2048", 00:24:21.838 "ffdhe3072", 00:24:21.838 "ffdhe4096", 00:24:21.838 "ffdhe6144", 00:24:21.838 "ffdhe8192" 00:24:21.838 ] 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "bdev_nvme_set_hotplug", 00:24:21.838 "params": { 00:24:21.838 "period_us": 100000, 00:24:21.838 "enable": false 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "bdev_malloc_create", 00:24:21.838 "params": { 00:24:21.838 "name": "malloc0", 00:24:21.838 "num_blocks": 8192, 00:24:21.838 "block_size": 4096, 00:24:21.838 "physical_block_size": 4096, 00:24:21.838 "uuid": "cd254559-e367-49a6-92ef-466e0ba8d7cc", 00:24:21.838 "optimal_io_boundary": 0, 00:24:21.838 "md_size": 0, 00:24:21.838 "dif_type": 0, 00:24:21.838 "dif_is_head_of_md": false, 00:24:21.838 "dif_pi_format": 0 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "bdev_wait_for_examine" 
00:24:21.838 } 00:24:21.838 ] 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "subsystem": "nbd", 00:24:21.838 "config": [] 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "subsystem": "scheduler", 00:24:21.838 "config": [ 00:24:21.838 { 00:24:21.838 "method": "framework_set_scheduler", 00:24:21.838 "params": { 00:24:21.838 "name": "static" 00:24:21.838 } 00:24:21.838 } 00:24:21.838 ] 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "subsystem": "nvmf", 00:24:21.838 "config": [ 00:24:21.838 { 00:24:21.838 "method": "nvmf_set_config", 00:24:21.838 "params": { 00:24:21.838 "discovery_filter": "match_any", 00:24:21.838 "admin_cmd_passthru": { 00:24:21.838 "identify_ctrlr": false 00:24:21.838 }, 00:24:21.838 "dhchap_digests": [ 00:24:21.838 "sha256", 00:24:21.838 "sha384", 00:24:21.838 "sha512" 00:24:21.838 ], 00:24:21.838 "dhchap_dhgroups": [ 00:24:21.838 "null", 00:24:21.838 "ffdhe2048", 00:24:21.838 "ffdhe3072", 00:24:21.838 "ffdhe4096", 00:24:21.838 "ffdhe6144", 00:24:21.838 "ffdhe8192" 00:24:21.838 ] 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "nvmf_set_max_subsystems", 00:24:21.838 "params": { 00:24:21.838 "max_subsystems": 1024 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "nvmf_set_crdt", 00:24:21.838 "params": { 00:24:21.838 "crdt1": 0, 00:24:21.838 "crdt2": 0, 00:24:21.838 "crdt3": 0 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "nvmf_create_transport", 00:24:21.838 "params": { 00:24:21.838 "trtype": "TCP", 00:24:21.838 "max_queue_depth": 128, 00:24:21.838 "max_io_qpairs_per_ctrlr": 127, 00:24:21.838 "in_capsule_data_size": 4096, 00:24:21.838 "max_io_size": 131072, 00:24:21.838 "io_unit_size": 131072, 00:24:21.838 "max_aq_depth": 128, 00:24:21.838 "num_shared_buffers": 511, 00:24:21.838 "buf_cache_size": 4294967295, 00:24:21.838 "dif_insert_or_strip": false, 00:24:21.838 "zcopy": false, 00:24:21.838 "c2h_success": false, 00:24:21.838 "sock_priority": 0, 00:24:21.838 "abort_timeout_sec": 1, 00:24:21.838 "ack_timeout": 0, 00:24:21.838 "data_wr_pool_size": 0 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "nvmf_create_subsystem", 00:24:21.838 "params": { 00:24:21.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.838 "allow_any_host": false, 00:24:21.838 "serial_number": "00000000000000000000", 00:24:21.838 "model_number": "SPDK bdev Controller", 00:24:21.838 "max_namespaces": 32, 00:24:21.838 "min_cntlid": 1, 00:24:21.838 "max_cntlid": 65519, 00:24:21.838 "ana_reporting": false 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "nvmf_subsystem_add_host", 00:24:21.838 "params": { 00:24:21.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.838 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.838 "psk": "key0" 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "nvmf_subsystem_add_ns", 00:24:21.838 "params": { 00:24:21.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.838 "namespace": { 00:24:21.838 "nsid": 1, 00:24:21.838 "bdev_name": "malloc0", 00:24:21.838 "nguid": "CD254559E36749A692EF466E0BA8D7CC", 00:24:21.838 "uuid": "cd254559-e367-49a6-92ef-466e0ba8d7cc", 00:24:21.838 "no_auto_visible": false 00:24:21.838 } 00:24:21.838 } 00:24:21.838 }, 00:24:21.838 { 00:24:21.838 "method": "nvmf_subsystem_add_listener", 00:24:21.838 "params": { 00:24:21.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.838 "listen_address": { 00:24:21.838 "trtype": "TCP", 00:24:21.838 "adrfam": "IPv4", 00:24:21.838 "traddr": "10.0.0.2", 00:24:21.838 "trsvcid": "4420" 00:24:21.838 }, 00:24:21.838 
"secure_channel": false, 00:24:21.838 "sock_impl": "ssl" 00:24:21.838 } 00:24:21.838 } 00:24:21.838 ] 00:24:21.838 } 00:24:21.838 ] 00:24:21.838 }' 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2420795 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2420795 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2420795 ']' 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.838 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.838 [2024-11-27 07:19:32.949661] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:24:21.838 [2024-11-27 07:19:32.949704] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.838 [2024-11-27 07:19:33.003490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.838 [2024-11-27 07:19:33.032429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.838 [2024-11-27 07:19:33.032457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.838 [2024-11-27 07:19:33.032462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.838 [2024-11-27 07:19:33.032466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.838 [2024-11-27 07:19:33.032471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:21.838 [2024-11-27 07:19:33.032916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.099 [2024-11-27 07:19:33.227069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.099 [2024-11-27 07:19:33.259102] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.099 [2024-11-27 07:19:33.259308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2421119 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2421119 /var/tmp/bdevperf.sock 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2421119 ']' 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.671 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:22.671 "subsystems": [ 00:24:22.671 { 00:24:22.671 "subsystem": "keyring", 00:24:22.671 "config": [ 00:24:22.671 { 00:24:22.671 "method": "keyring_file_add_key", 00:24:22.671 "params": { 00:24:22.671 "name": "key0", 00:24:22.672 "path": "/tmp/tmp.8HJ5SGooEZ" 00:24:22.672 } 00:24:22.672 } 00:24:22.672 ] 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "subsystem": "iobuf", 00:24:22.672 "config": [ 00:24:22.672 { 00:24:22.672 "method": "iobuf_set_options", 00:24:22.672 "params": { 00:24:22.672 "small_pool_count": 8192, 00:24:22.672 "large_pool_count": 1024, 00:24:22.672 "small_bufsize": 8192, 00:24:22.672 "large_bufsize": 135168, 00:24:22.672 "enable_numa": false 00:24:22.672 } 00:24:22.672 } 00:24:22.672 ] 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "subsystem": "sock", 00:24:22.672 "config": [ 00:24:22.672 { 00:24:22.672 "method": "sock_set_default_impl", 00:24:22.672 "params": { 00:24:22.672 "impl_name": "posix" 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "sock_impl_set_options", 00:24:22.672 "params": { 00:24:22.672 "impl_name": "ssl", 00:24:22.672 "recv_buf_size": 4096, 00:24:22.672 "send_buf_size": 4096, 00:24:22.672 "enable_recv_pipe": true, 00:24:22.672 "enable_quickack": false, 00:24:22.672 "enable_placement_id": 0, 00:24:22.672 "enable_zerocopy_send_server": true, 00:24:22.672 "enable_zerocopy_send_client": false, 00:24:22.672 "zerocopy_threshold": 0, 00:24:22.672 "tls_version": 0, 00:24:22.672 "enable_ktls": false 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "sock_impl_set_options", 00:24:22.672 "params": { 00:24:22.672 "impl_name": "posix", 00:24:22.672 "recv_buf_size": 2097152, 00:24:22.672 "send_buf_size": 2097152, 00:24:22.672 "enable_recv_pipe": true, 00:24:22.672 "enable_quickack": false, 00:24:22.672 "enable_placement_id": 0, 00:24:22.672 "enable_zerocopy_send_server": true, 00:24:22.672 "enable_zerocopy_send_client": false, 00:24:22.672 "zerocopy_threshold": 0, 00:24:22.672 "tls_version": 0, 00:24:22.672 "enable_ktls": false 00:24:22.672 } 00:24:22.672 } 00:24:22.672 ] 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "subsystem": "vmd", 00:24:22.672 "config": [] 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "subsystem": "accel", 00:24:22.672 "config": [ 00:24:22.672 { 00:24:22.672 "method": "accel_set_options", 00:24:22.672 "params": { 00:24:22.672 "small_cache_size": 128, 00:24:22.672 "large_cache_size": 16, 00:24:22.672 "task_count": 2048, 00:24:22.672 "sequence_count": 2048, 00:24:22.672 "buf_count": 2048 00:24:22.672 } 00:24:22.672 } 00:24:22.672 ] 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "subsystem": "bdev", 00:24:22.672 "config": [ 00:24:22.672 { 00:24:22.672 "method": "bdev_set_options", 00:24:22.672 "params": { 00:24:22.672 "bdev_io_pool_size": 65535, 00:24:22.672 "bdev_io_cache_size": 256, 00:24:22.672 "bdev_auto_examine": true, 00:24:22.672 "iobuf_small_cache_size": 128, 00:24:22.672 "iobuf_large_cache_size": 16 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": 
"bdev_raid_set_options", 00:24:22.672 "params": { 00:24:22.672 "process_window_size_kb": 1024, 00:24:22.672 "process_max_bandwidth_mb_sec": 0 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "bdev_iscsi_set_options", 00:24:22.672 "params": { 00:24:22.672 "timeout_sec": 30 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "bdev_nvme_set_options", 00:24:22.672 "params": { 00:24:22.672 "action_on_timeout": "none", 00:24:22.672 "timeout_us": 0, 00:24:22.672 "timeout_admin_us": 0, 00:24:22.672 "keep_alive_timeout_ms": 10000, 00:24:22.672 "arbitration_burst": 0, 00:24:22.672 "low_priority_weight": 0, 00:24:22.672 "medium_priority_weight": 0, 00:24:22.672 "high_priority_weight": 0, 00:24:22.672 "nvme_adminq_poll_period_us": 10000, 00:24:22.672 "nvme_ioq_poll_period_us": 0, 00:24:22.672 "io_queue_requests": 512, 00:24:22.672 "delay_cmd_submit": true, 00:24:22.672 "transport_retry_count": 4, 00:24:22.672 "bdev_retry_count": 3, 00:24:22.672 "transport_ack_timeout": 0, 00:24:22.672 "ctrlr_loss_timeout_sec": 0, 00:24:22.672 "reconnect_delay_sec": 0, 00:24:22.672 "fast_io_fail_timeout_sec": 0, 00:24:22.672 "disable_auto_failback": false, 00:24:22.672 "generate_uuids": false, 00:24:22.672 "transport_tos": 0, 00:24:22.672 "nvme_error_stat": false, 00:24:22.672 "rdma_srq_size": 0, 00:24:22.672 "io_path_stat": false, 00:24:22.672 "allow_accel_sequence": false, 00:24:22.672 "rdma_max_cq_size": 0, 00:24:22.672 "rdma_cm_event_timeout_ms": 0, 00:24:22.672 "dhchap_digests": [ 00:24:22.672 "sha256", 00:24:22.672 "sha384", 00:24:22.672 "sha512" 00:24:22.672 ], 00:24:22.672 "dhchap_dhgroups": [ 00:24:22.672 "null", 00:24:22.672 "ffdhe2048", 00:24:22.672 "ffdhe3072", 00:24:22.672 "ffdhe4096", 00:24:22.672 "ffdhe6144", 00:24:22.672 "ffdhe8192" 00:24:22.672 ] 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "bdev_nvme_attach_controller", 00:24:22.672 "params": { 00:24:22.672 "name": "nvme0", 00:24:22.672 "trtype": "TCP", 00:24:22.672 "adrfam": "IPv4", 00:24:22.672 "traddr": "10.0.0.2", 00:24:22.672 "trsvcid": "4420", 00:24:22.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.672 "prchk_reftag": false, 00:24:22.672 "prchk_guard": false, 00:24:22.672 "ctrlr_loss_timeout_sec": 0, 00:24:22.672 "reconnect_delay_sec": 0, 00:24:22.672 "fast_io_fail_timeout_sec": 0, 00:24:22.672 "psk": "key0", 00:24:22.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.672 "hdgst": false, 00:24:22.672 "ddgst": false, 00:24:22.672 "multipath": "multipath" 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "bdev_nvme_set_hotplug", 00:24:22.672 "params": { 00:24:22.672 "period_us": 100000, 00:24:22.672 "enable": false 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "bdev_enable_histogram", 00:24:22.672 "params": { 00:24:22.672 "name": "nvme0n1", 00:24:22.672 "enable": true 00:24:22.672 } 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "method": "bdev_wait_for_examine" 00:24:22.672 } 00:24:22.672 ] 00:24:22.672 }, 00:24:22.672 { 00:24:22.672 "subsystem": "nbd", 00:24:22.672 "config": [] 00:24:22.672 } 00:24:22.672 ] 00:24:22.672 }' 00:24:22.672 [2024-11-27 07:19:33.849748] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:24:22.672 [2024-11-27 07:19:33.849801] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421119 ] 00:24:22.933 [2024-11-27 07:19:33.932937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.933 [2024-11-27 07:19:33.962685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.933 [2024-11-27 07:19:34.098676] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.503 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.503 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.503 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.503 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:23.764 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.764 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.764 Running I/O for 1 seconds... 00:24:24.708 5805.00 IOPS, 22.68 MiB/s 00:24:24.708 Latency(us) 00:24:24.708 [2024-11-27T06:19:35.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.708 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:24.708 Verification LBA range: start 0x0 length 0x2000 00:24:24.708 nvme0n1 : 1.02 5843.58 22.83 0.00 0.00 21757.72 4833.28 24903.68 00:24:24.708 [2024-11-27T06:19:35.913Z] =================================================================================================================== 00:24:24.708 [2024-11-27T06:19:35.913Z] Total : 5843.58 22.83 0.00 0.00 21757.72 4833.28 24903.68 00:24:24.708 { 00:24:24.708 "results": [ 00:24:24.708 { 00:24:24.708 "job": "nvme0n1", 00:24:24.708 "core_mask": "0x2", 00:24:24.708 "workload": "verify", 00:24:24.708 "status": "finished", 00:24:24.708 "verify_range": { 00:24:24.708 "start": 0, 00:24:24.708 "length": 8192 00:24:24.708 }, 00:24:24.708 "queue_depth": 128, 00:24:24.708 "io_size": 4096, 00:24:24.708 "runtime": 1.015303, 00:24:24.708 "iops": 5843.5757601425385, 00:24:24.708 "mibps": 22.82646781305679, 00:24:24.708 "io_failed": 0, 00:24:24.708 "io_timeout": 0, 00:24:24.708 "avg_latency_us": 21757.715431204, 00:24:24.708 "min_latency_us": 4833.28, 00:24:24.708 "max_latency_us": 24903.68 00:24:24.708 } 00:24:24.708 ], 00:24:24.708 "core_count": 1 00:24:24.708 } 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
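The perform_tests output above is plain JSON, so downstream tooling can pull the headline numbers out with jq. A hypothetical post-processing step, assuming that JSON block were captured to a file named results.json (the test itself writes no such file):

    # results.json is an assumed capture of the "results" JSON shown above.
    jq -r '.results[]
           | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' \
       results.json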
00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:24.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:24.968 nvmf_trace.0 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2421119 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2421119 ']' 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2421119 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421119 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421119' 00:24:24.969 killing process with pid 2421119 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2421119 00:24:24.969 Received shutdown signal, test time was about 1.000000 seconds 00:24:24.969 00:24:24.969 Latency(us) 00:24:24.969 [2024-11-27T06:19:36.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.969 [2024-11-27T06:19:36.174Z] =================================================================================================================== 00:24:24.969 [2024-11-27T06:19:36.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.969 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2421119 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.230 rmmod nvme_tcp 00:24:25.230 rmmod nvme_fabrics 00:24:25.230 rmmod nvme_keyring 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.230 07:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2420795 ']' 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2420795 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2420795 ']' 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2420795 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420795 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420795' 00:24:25.230 killing process with pid 2420795 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2420795 00:24:25.230 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2420795 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.491 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.404 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.404 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pTCyFrwfCs /tmp/tmp.J5wOeTAn61 /tmp/tmp.8HJ5SGooEZ 00:24:27.404 00:24:27.404 real 1m27.438s 00:24:27.405 user 2m18.560s 00:24:27.405 sys 0m26.603s 00:24:27.405 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.405 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.405 ************************************ 00:24:27.405 END TEST nvmf_tls 
00:24:27.405 ************************************ 00:24:27.405 07:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:27.405 07:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.405 07:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.405 07:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 ************************************ 00:24:27.666 START TEST nvmf_fips 00:24:27.667 ************************************ 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:27.667 * Looking for test storage... 00:24:27.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:27.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.667 --rc genhtml_branch_coverage=1 00:24:27.667 --rc genhtml_function_coverage=1 00:24:27.667 --rc genhtml_legend=1 00:24:27.667 --rc geninfo_all_blocks=1 00:24:27.667 --rc geninfo_unexecuted_blocks=1 00:24:27.667 00:24:27.667 ' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:27.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.667 --rc genhtml_branch_coverage=1 00:24:27.667 --rc genhtml_function_coverage=1 00:24:27.667 --rc genhtml_legend=1 00:24:27.667 --rc geninfo_all_blocks=1 00:24:27.667 --rc geninfo_unexecuted_blocks=1 00:24:27.667 00:24:27.667 ' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:27.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.667 --rc genhtml_branch_coverage=1 00:24:27.667 --rc genhtml_function_coverage=1 00:24:27.667 --rc genhtml_legend=1 00:24:27.667 --rc geninfo_all_blocks=1 00:24:27.667 --rc geninfo_unexecuted_blocks=1 00:24:27.667 00:24:27.667 ' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:27.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.667 --rc genhtml_branch_coverage=1 00:24:27.667 --rc genhtml_function_coverage=1 00:24:27.667 --rc genhtml_legend=1 00:24:27.667 --rc geninfo_all_blocks=1 00:24:27.667 --rc geninfo_unexecuted_blocks=1 00:24:27.667 00:24:27.667 ' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.667 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:27.668 07:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:27.668 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:27.929 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:27.930 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:27.930 Error setting digest 00:24:27.930 40A2404B9A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:27.930 40A2404B9A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.930 
07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.930 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.084 07:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:36.084 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:36.084 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:36.084 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.084 07:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:36.085 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:36.085 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:36.085 07:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:36.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:24:36.085 00:24:36.085 --- 10.0.0.2 ping statistics --- 00:24:36.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.085 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:24:36.085 00:24:36.085 --- 10.0.0.1 ping statistics --- 00:24:36.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.085 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2425844 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2425844 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2425844 ']' 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.085 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.085 [2024-11-27 07:19:46.668812] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
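Behind the nvmf_tcp_init trace above sits a small amount of iproute2 plumbing: one physical e810 port is moved into a fresh network namespace to act as the target, while its sibling stays in the root namespace as the initiator. Condensed from the commands in this log (interface names and addresses are whatever this CI host happens to use; the iptables comment tagging is dropped for brevity):

    # Condensed from the nvmf_tcp_init trace; cvl_0_0/cvl_0_1 are this
    # host's e810 ports, 10.0.0.0/24 is the test subnet.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator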
00:24:36.085 [2024-11-27 07:19:46.668886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.085 [2024-11-27 07:19:46.767463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.085 [2024-11-27 07:19:46.817212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.085 [2024-11-27 07:19:46.817259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.085 [2024-11-27 07:19:46.817267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.085 [2024-11-27 07:19:46.817274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.086 [2024-11-27 07:19:46.817280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.086 [2024-11-27 07:19:46.818015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.CqY 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.CqY 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.CqY 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.CqY 00:24:36.347 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.608 [2024-11-27 07:19:47.677795] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.608 [2024-11-27 07:19:47.693793] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.608 [2024-11-27 07:19:47.694099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.608 malloc0 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.608 07:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2426011 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2426011 /var/tmp/bdevperf.sock 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2426011 ']' 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.608 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.869 [2024-11-27 07:19:47.839569] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:24:36.869 [2024-11-27 07:19:47.839647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426011 ] 00:24:36.869 [2024-11-27 07:19:47.936550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.869 [2024-11-27 07:19:47.987583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.812 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.812 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:37.812 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.CqY 00:24:37.812 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:37.812 [2024-11-27 07:19:49.007427] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.076 TLSTESTn1 00:24:38.076 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.076 Running I/O for 10 seconds... 
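While the 10-second run above spins up, it is worth collecting the PSK provisioning that produced it, gathered from the fips.sh trace into one sequence: write the interchange-format key to a file, restrict its permissions, register it with the bdevperf instance's keyring, and attach the controller referencing the key by name. The key value and paths are the ones generated in this run:

    # Collected from the fips.sh trace above; key and paths are run-specific.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"          # restrict permissions, as the test script does
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0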
00:24:40.416 3036.00 IOPS, 11.86 MiB/s [2024-11-27T06:19:52.565Z] 3672.50 IOPS, 14.35 MiB/s [2024-11-27T06:19:53.508Z] 4475.67 IOPS, 17.48 MiB/s [2024-11-27T06:19:54.450Z] 4913.25 IOPS, 19.19 MiB/s [2024-11-27T06:19:55.394Z] 4942.40 IOPS, 19.31 MiB/s [2024-11-27T06:19:56.339Z] 5009.17 IOPS, 19.57 MiB/s [2024-11-27T06:19:57.281Z] 5213.86 IOPS, 20.37 MiB/s [2024-11-27T06:19:58.810Z] 5185.12 IOPS, 20.25 MiB/s [2024-11-27T06:19:59.493Z] 5280.00 IOPS, 20.62 MiB/s [2024-11-27T06:19:59.493Z] 5375.00 IOPS, 21.00 MiB/s 00:24:48.288 Latency(us) 00:24:48.288 [2024-11-27T06:19:59.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.288 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:48.288 Verification LBA range: start 0x0 length 0x2000 00:24:48.288 TLSTESTn1 : 10.01 5381.22 21.02 0.00 0.00 23752.70 4396.37 56797.87 00:24:48.288 [2024-11-27T06:19:59.493Z] =================================================================================================================== 00:24:48.288 [2024-11-27T06:19:59.493Z] Total : 5381.22 21.02 0.00 0.00 23752.70 4396.37 56797.87 00:24:48.288 { 00:24:48.288 "results": [ 00:24:48.288 { 00:24:48.288 "job": "TLSTESTn1", 00:24:48.288 "core_mask": "0x4", 00:24:48.288 "workload": "verify", 00:24:48.288 "status": "finished", 00:24:48.288 "verify_range": { 00:24:48.288 "start": 0, 00:24:48.288 "length": 8192 00:24:48.288 }, 00:24:48.288 "queue_depth": 128, 00:24:48.288 "io_size": 4096, 00:24:48.288 "runtime": 10.01167, 00:24:48.288 "iops": 5381.220116124483, 00:24:48.288 "mibps": 21.02039107861126, 00:24:48.288 "io_failed": 0, 00:24:48.288 "io_timeout": 0, 00:24:48.288 "avg_latency_us": 23752.695003372002, 00:24:48.288 "min_latency_us": 4396.373333333333, 00:24:48.288 "max_latency_us": 56797.86666666667 00:24:48.288 } 00:24:48.288 ], 00:24:48.288 "core_count": 1 00:24:48.288 } 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:48.288 nvmf_trace.0 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2426011 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2426011 ']' 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2426011 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426011 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426011' 00:24:48.288 killing process with pid 2426011 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2426011 00:24:48.288 Received shutdown signal, test time was about 10.000000 seconds 00:24:48.288 00:24:48.288 Latency(us) 00:24:48.288 [2024-11-27T06:19:59.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.288 [2024-11-27T06:19:59.493Z] =================================================================================================================== 00:24:48.288 [2024-11-27T06:19:59.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.288 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2426011 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.549 rmmod nvme_tcp 00:24:48.549 rmmod nvme_fabrics 00:24:48.549 rmmod nvme_keyring 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2425844 ']' 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2425844 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2425844 ']' 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2425844 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2425844 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:48.549 07:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2425844' 00:24:48.549 killing process with pid 2425844 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2425844 00:24:48.549 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2425844 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.810 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.CqY 00:24:50.724 00:24:50.724 real 0m23.251s 00:24:50.724 user 0m24.998s 00:24:50.724 sys 0m9.620s 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:50.724 ************************************ 00:24:50.724 END TEST nvmf_fips 00:24:50.724 ************************************ 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.724 07:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:50.986 ************************************ 00:24:50.986 START TEST nvmf_control_msg_list 00:24:50.986 ************************************ 00:24:50.986 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:50.986 * Looking for test storage... 
00:24:50.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.986 --rc genhtml_branch_coverage=1 00:24:50.986 --rc genhtml_function_coverage=1 00:24:50.986 --rc genhtml_legend=1 00:24:50.986 --rc geninfo_all_blocks=1 00:24:50.986 --rc geninfo_unexecuted_blocks=1 00:24:50.986 00:24:50.986 ' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.986 --rc genhtml_branch_coverage=1 00:24:50.986 --rc genhtml_function_coverage=1 00:24:50.986 --rc genhtml_legend=1 00:24:50.986 --rc geninfo_all_blocks=1 00:24:50.986 --rc geninfo_unexecuted_blocks=1 00:24:50.986 00:24:50.986 ' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.986 --rc genhtml_branch_coverage=1 00:24:50.986 --rc genhtml_function_coverage=1 00:24:50.986 --rc genhtml_legend=1 00:24:50.986 --rc geninfo_all_blocks=1 00:24:50.986 --rc geninfo_unexecuted_blocks=1 00:24:50.986 00:24:50.986 ' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.986 --rc genhtml_branch_coverage=1 00:24:50.986 --rc genhtml_function_coverage=1 00:24:50.986 --rc genhtml_legend=1 00:24:50.986 --rc geninfo_all_blocks=1 00:24:50.986 --rc geninfo_unexecuted_blocks=1 00:24:50.986 00:24:50.986 ' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[remainder of the identical PATH value elided] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[remainder of the identical PATH value elided] 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:50.986 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[identical PATH value elided] 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.987 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.248 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:59.392 07:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.392 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:59.393 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.393 07:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:59.393 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:59.393 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:59.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.393 07:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:24:59.393 00:24:59.393 --- 10.0.0.2 ping statistics --- 00:24:59.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.393 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:59.393 00:24:59.393 --- 10.0.0.1 ping statistics --- 00:24:59.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.393 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2432558 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2432558 00:24:59.393 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:59.394 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2432558 ']' 00:24:59.394 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.394 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.394 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.394 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.394 07:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.394 [2024-11-27 07:20:09.804355] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:24:59.394 [2024-11-27 07:20:09.804420] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.394 [2024-11-27 07:20:09.903880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.394 [2024-11-27 07:20:09.954360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.394 [2024-11-27 07:20:09.954411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.394 [2024-11-27 07:20:09.954419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.394 [2024-11-27 07:20:09.954431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.394 [2024-11-27 07:20:09.954438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
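With the target up inside the cvl_0_0_ns_spdk namespace, the script shapes it over RPC before launching three competing spdk_nvme_perf initiators. Condensed from the rpc_cmd trace that follows, the target-side setup amounts to the sketch below; every name and size is verbatim from this run, while the rpc.py path is abbreviated (the actual run issues these through the harness's rpc_cmd wrapper against /var/tmp/spdk.sock):

    # TCP transport with 768-byte in-capsule data and a single control
    # message buffer, then a malloc bdev (32 MB, 512-byte blocks) exposed
    # behind subsystem cnode0, listening on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The single control-message buffer is deliberately scarce: with three queue-depth-1 initiators contending, the wide latency spread in the results below (one core averaging roughly 40 ms against sub-millisecond averages on the others) is consistent with connections queuing for it.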
00:24:59.394 [2024-11-27 07:20:09.955250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.656 [2024-11-27 07:20:10.670647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.656 Malloc0 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.656 07:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.656 [2024-11-27 07:20:10.725102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2432648 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2432650 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2432652 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2432648 00:24:59.656 07:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.656 [2024-11-27 07:20:10.836033] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:59.656 [2024-11-27 07:20:10.836435] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:59.656 [2024-11-27 07:20:10.836734] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:01.044 Initializing NVMe Controllers 00:25:01.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:01.044 Initialization complete. Launching workers. 
00:25:01.044 ======================================================== 00:25:01.044 Latency(us) 00:25:01.044 Device Information : IOPS MiB/s Average min max 00:25:01.044 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1503.00 5.87 665.12 226.33 954.15 00:25:01.044 ======================================================== 00:25:01.044 Total : 1503.00 5.87 665.12 226.33 954.15 00:25:01.044 00:25:01.044 Initializing NVMe Controllers 00:25:01.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:01.044 Initialization complete. Launching workers. 00:25:01.044 ======================================================== 00:25:01.044 Latency(us) 00:25:01.044 Device Information : IOPS MiB/s Average min max 00:25:01.044 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40911.35 40854.42 41093.41 00:25:01.044 ======================================================== 00:25:01.044 Total : 25.00 0.10 40911.35 40854.42 41093.41 00:25:01.044 00:25:01.044 [2024-11-27 07:20:12.052137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c94e60 is same with the state(6) to be set 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2432650 00:25:01.044 Initializing NVMe Controllers 00:25:01.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:01.044 Initialization complete. Launching workers. 00:25:01.044 ======================================================== 00:25:01.044 Latency(us) 00:25:01.044 Device Information : IOPS MiB/s Average min max 00:25:01.044 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2071.00 8.09 482.79 158.66 749.00 00:25:01.044 ======================================================== 00:25:01.044 Total : 2071.00 8.09 482.79 158.66 749.00 00:25:01.044 00:25:01.044 [2024-11-27 07:20:12.110190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95800 is same with the state(6) to be set 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2432652 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.044 rmmod nvme_tcp 00:25:01.044 rmmod nvme_fabrics 00:25:01.044 rmmod nvme_keyring 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2432558 ']' 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2432558 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2432558 ']' 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2432558 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.044 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432558 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432558' 00:25:01.304 killing process with pid 2432558 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2432558 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2432558 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.304 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.846 00:25:03.846 real 0m12.569s 00:25:03.846 user 0m8.221s 00:25:03.846 sys 0m6.760s 00:25:03.846 07:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.846 ************************************ 00:25:03.846 END TEST nvmf_control_msg_list 00:25:03.846 ************************************ 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:03.846 ************************************ 00:25:03.846 START TEST nvmf_wait_for_buf 00:25:03.846 ************************************ 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:03.846 * Looking for test storage... 00:25:03.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.846 --rc genhtml_branch_coverage=1 00:25:03.846 --rc genhtml_function_coverage=1 00:25:03.846 --rc genhtml_legend=1 00:25:03.846 --rc geninfo_all_blocks=1 00:25:03.846 --rc geninfo_unexecuted_blocks=1 00:25:03.846 00:25:03.846 ' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.846 --rc genhtml_branch_coverage=1 00:25:03.846 --rc genhtml_function_coverage=1 00:25:03.846 --rc genhtml_legend=1 00:25:03.846 --rc geninfo_all_blocks=1 00:25:03.846 --rc geninfo_unexecuted_blocks=1 00:25:03.846 00:25:03.846 ' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.846 --rc genhtml_branch_coverage=1 00:25:03.846 --rc genhtml_function_coverage=1 00:25:03.846 --rc genhtml_legend=1 00:25:03.846 --rc geninfo_all_blocks=1 00:25:03.846 --rc geninfo_unexecuted_blocks=1 00:25:03.846 00:25:03.846 ' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.846 --rc genhtml_branch_coverage=1 00:25:03.846 --rc genhtml_function_coverage=1 00:25:03.846 --rc genhtml_legend=1 00:25:03.846 --rc geninfo_all_blocks=1 00:25:03.846 --rc geninfo_unexecuted_blocks=1 00:25:03.846 00:25:03.846 ' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.846 07:20:14 
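The lt/cmp_versions trace above decides which lcov flags to use by comparing dot-separated versions component by component (the real scripts/common.sh splits on '.', '-' and ':' and supports more operators). A condensed sketch of the same idea:

    lt() {   # return 0 (true) when version $1 sorts before version $2
        local -a v1 v2
        local i
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first differing component decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo old-lcov   # the branch taken in this run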
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.846 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.847 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.996 
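A few records back, build_nvmf_app_args logged '[: : integer expression expected' at nvmf/common.sh line 33: the flag tested there expands to the empty string, and test(1) cannot compare '' with -eq. The trace does not show which variable it is, so as a hypothetical fix, a default expansion keeps the comparison numeric:

    # SOME_FLAG is a stand-in for whatever variable common.sh line 33 actually tests
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # :-0 supplies a numeric default when unset or empty
        NVMF_APP+=(--hypothetical-arg)     # hypothetical effect of the flag
    fi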
07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:11.996 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:11.996 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:11.996 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.996 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:11.997 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.997 07:20:22 
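The device scan above maps each supported PCI function to its network interfaces by asking sysfs; the two e810 ports (0x8086:0x159b, ice driver) resolve to cvl_0_0 and cvl_0_1 in this run. Roughly, under the same variable names as the trace:

    shopt -s nullglob   # let the glob expand to nothing when no netdev is bound
    for pci in "${pci_devs[@]}"; do
        # the kernel lists netdevs bound to a PCI function under its sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        ((${#pci_net_devs[@]} == 0)) && continue
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done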
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:11.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:25:11.997 00:25:11.997 --- 10.0.0.2 ping statistics --- 00:25:11.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.997 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:25:11.997 00:25:11.997 --- 10.0.0.1 ping statistics --- 00:25:11.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.997 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2437255 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2437255 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2437255 ']' 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.997 [2024-11-27 07:20:22.512783] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
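nvmftestinit above splits one physical NIC into the two ends of the connection: port cvl_0_0 moves into a fresh network namespace and the target is launched there, while the initiator keeps cvl_0_1 in the root namespace, so NVMe/TCP traffic crosses a real link even on a single host. Condensed from the trace (with $rootdir standing for the spdk checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    # every target-side command, the target app included, then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc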
00:25:11.997 [2024-11-27 07:20:22.512852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.997 [2024-11-27 07:20:22.588161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.997 [2024-11-27 07:20:22.633676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.997 [2024-11-27 07:20:22.633728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.997 [2024-11-27 07:20:22.633735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.997 [2024-11-27 07:20:22.633741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.997 [2024-11-27 07:20:22.633746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.997 [2024-11-27 07:20:22.634428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.997 07:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.997 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.998 Malloc0 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.998 [2024-11-27 07:20:22.847077] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.998 [2024-11-27 07:20:22.883420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.998 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:11.998 [2024-11-27 07:20:22.982256] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:13.382 Initializing NVMe Controllers 00:25:13.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:13.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:13.382 Initialization complete. Launching workers. 00:25:13.383 ======================================================== 00:25:13.383 Latency(us) 00:25:13.383 Device Information : IOPS MiB/s Average min max 00:25:13.383 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 117.57 14.70 35237.64 8004.48 78815.82 00:25:13.383 ======================================================== 00:25:13.383 Total : 117.57 14.70 35237.64 8004.48 78815.82 00:25:13.383 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1862 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1862 -eq 0 ]] 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.383 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.383 rmmod nvme_tcp 00:25:13.644 rmmod nvme_fabrics 00:25:13.644 rmmod nvme_keyring 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2437255 ']' 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2437255 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2437255 ']' 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2437255 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437255 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437255' 00:25:13.644 killing process with pid 2437255 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2437255 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2437255 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.644 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.905 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.905 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.905 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.905 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.905 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.819 00:25:15.819 real 0m12.318s 00:25:15.819 user 0m4.520s 00:25:15.819 sys 0m6.266s 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.819 ************************************ 00:25:15.819 END TEST nvmf_wait_for_buf 00:25:15.819 ************************************ 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:15.819 07:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.819 07:20:26 
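Condensing the nvmf_wait_for_buf run that just ended: the test shrinks the shared iobuf pool to 154 small buffers, creates a TCP target with few transport buffers (-n 24 -b 24), drives 128 KiB random reads at it for one second, then asserts that the transport had to retry buffer allocation; retry_count came back 1862 here, and only a stubbornly zero counter fails the test. The traced sequence, minus the xtrace noise:

    rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0    # no accel-side caching
    rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # deliberately tiny pool
    rpc_cmd framework_start_init
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    retry_count=$(rpc_cmd iobuf_get_stats |
        jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1   # pass means the pool was exhausted at least once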
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:23.962 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:23.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:23.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:23.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:23.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:23.963 ************************************ 00:25:23.963 START TEST nvmf_perf_adq 00:25:23.963 ************************************ 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:23.963 * Looking for test storage... 00:25:23.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.963 07:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.963 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.964 --rc genhtml_branch_coverage=1 00:25:23.964 --rc genhtml_function_coverage=1 00:25:23.964 --rc genhtml_legend=1 00:25:23.964 --rc geninfo_all_blocks=1 00:25:23.964 --rc geninfo_unexecuted_blocks=1 00:25:23.964 00:25:23.964 ' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.964 --rc genhtml_branch_coverage=1 00:25:23.964 --rc genhtml_function_coverage=1 00:25:23.964 --rc genhtml_legend=1 00:25:23.964 --rc geninfo_all_blocks=1 00:25:23.964 --rc geninfo_unexecuted_blocks=1 00:25:23.964 00:25:23.964 ' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.964 --rc genhtml_branch_coverage=1 00:25:23.964 --rc genhtml_function_coverage=1 00:25:23.964 --rc genhtml_legend=1 00:25:23.964 --rc geninfo_all_blocks=1 00:25:23.964 --rc geninfo_unexecuted_blocks=1 00:25:23.964 00:25:23.964 ' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:23.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.964 --rc genhtml_branch_coverage=1 00:25:23.964 --rc genhtml_function_coverage=1 00:25:23.964 --rc genhtml_legend=1 00:25:23.964 --rc geninfo_all_blocks=1 00:25:23.964 --rc geninfo_unexecuted_blocks=1 00:25:23.964 00:25:23.964 ' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:23.964 07:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.964 07:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.548 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.549 07:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:30.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:30.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:30.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:30.549 07:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:30.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:30.549 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:31.934 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:34.480 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:39.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:39.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:39.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:39.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.785 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:39.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:25:39.786 00:25:39.786 --- 10.0.0.2 ping statistics --- 00:25:39.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.786 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:25:39.786 00:25:39.786 --- 10.0.0.1 ping statistics --- 00:25:39.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.786 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2447303 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2447303 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2447303 ']' 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.786 07:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.786 [2024-11-27 07:20:50.654454] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:25:39.786 [2024-11-27 07:20:50.654530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.786 [2024-11-27 07:20:50.756353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.786 [2024-11-27 07:20:50.811486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.786 [2024-11-27 07:20:50.811538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.786 [2024-11-27 07:20:50.811547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.786 [2024-11-27 07:20:50.811554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.786 [2024-11-27 07:20:50.811560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.786 [2024-11-27 07:20:50.813921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.786 [2024-11-27 07:20:50.814078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.786 [2024-11-27 07:20:50.814244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.786 [2024-11-27 07:20:50.814244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.358 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.358 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:40.358 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.358 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.358 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.619 
07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.619 [2024-11-27 07:20:51.720493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.619 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.620 Malloc1 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.620 [2024-11-27 07:20:51.802301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2447518 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:40.620 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:43.169 "tick_rate": 2400000000, 00:25:43.169 "poll_groups": [ 00:25:43.169 { 00:25:43.169 "name": "nvmf_tgt_poll_group_000", 00:25:43.169 "admin_qpairs": 1, 00:25:43.169 "io_qpairs": 1, 00:25:43.169 "current_admin_qpairs": 1, 00:25:43.169 "current_io_qpairs": 1, 00:25:43.169 "pending_bdev_io": 0, 00:25:43.169 "completed_nvme_io": 15796, 00:25:43.169 "transports": [ 00:25:43.169 { 00:25:43.169 "trtype": "TCP" 00:25:43.169 } 00:25:43.169 ] 00:25:43.169 }, 00:25:43.169 { 00:25:43.169 "name": "nvmf_tgt_poll_group_001", 00:25:43.169 "admin_qpairs": 0, 00:25:43.169 "io_qpairs": 1, 00:25:43.169 "current_admin_qpairs": 0, 00:25:43.169 "current_io_qpairs": 1, 00:25:43.169 "pending_bdev_io": 0, 00:25:43.169 "completed_nvme_io": 16275, 00:25:43.169 "transports": [ 00:25:43.169 { 00:25:43.169 "trtype": "TCP" 00:25:43.169 } 00:25:43.169 ] 00:25:43.169 }, 00:25:43.169 { 00:25:43.169 "name": "nvmf_tgt_poll_group_002", 00:25:43.169 "admin_qpairs": 0, 00:25:43.169 "io_qpairs": 1, 00:25:43.169 "current_admin_qpairs": 0, 00:25:43.169 "current_io_qpairs": 1, 00:25:43.169 "pending_bdev_io": 0, 00:25:43.169 "completed_nvme_io": 16880, 00:25:43.169 "transports": [ 00:25:43.169 { 00:25:43.169 "trtype": "TCP" 00:25:43.169 } 00:25:43.169 ] 00:25:43.169 }, 00:25:43.169 { 00:25:43.169 "name": "nvmf_tgt_poll_group_003", 00:25:43.169 "admin_qpairs": 0, 00:25:43.169 "io_qpairs": 1, 00:25:43.169 "current_admin_qpairs": 0, 00:25:43.169 "current_io_qpairs": 1, 00:25:43.169 "pending_bdev_io": 0, 00:25:43.169 "completed_nvme_io": 16017, 00:25:43.169 "transports": [ 00:25:43.169 { 00:25:43.169 "trtype": "TCP" 00:25:43.169 } 00:25:43.169 ] 00:25:43.169 } 00:25:43.169 ] 00:25:43.169 }' 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:43.169 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2447518 00:25:51.407 Initializing NVMe Controllers 00:25:51.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:25:51.407 Initialization complete. Launching workers. 00:25:51.407 ======================================================== 00:25:51.407 Latency(us) 00:25:51.407 Device Information : IOPS MiB/s Average min max 00:25:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12385.20 48.38 5167.19 1229.75 14147.78 00:25:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13176.90 51.47 4857.31 1362.10 12701.79 00:25:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13212.20 51.61 4843.96 1169.55 12443.77 00:25:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12756.40 49.83 5027.38 1091.86 45399.49 00:25:51.407 ======================================================== 00:25:51.407 Total : 51530.68 201.29 4970.47 1091.86 45399.49 00:25:51.407 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.407 rmmod nvme_tcp 00:25:51.407 rmmod nvme_fabrics 00:25:51.407 rmmod nvme_keyring 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2447303 ']' 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2447303 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2447303 ']' 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2447303 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2447303 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2447303' 00:25:51.407 killing process with pid 2447303 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2447303 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2447303 00:25:51.407 07:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.407 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.322 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.322 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:53.322 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:53.322 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:55.236 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:57.148 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.447 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:02.448 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:02.448 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:02.448 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:02.448 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.448 07:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:26:02.448 00:26:02.448 --- 10.0.0.2 ping statistics --- 00:26:02.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.448 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:26:02.448 00:26:02.448 --- 10.0.0.1 ping statistics --- 00:26:02.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.448 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:02.448 net.core.busy_poll = 1 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:02.448 net.core.busy_read = 1 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:02.448 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2452785 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2452785 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2452785 ']' 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.711 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.972 [2024-11-27 07:21:13.917353] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:26:02.972 [2024-11-27 07:21:13.917421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.972 [2024-11-27 07:21:14.019283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.972 [2024-11-27 07:21:14.072065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
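The adq_configure_driver steps traced above reduce to a short, reproducible sequence. A minimal sketch using the interface name and listener address from this run (the netns wrapper and the set_xps_rxqs step are omitted):

    IFACE=cvl_0_0; TADDR=10.0.0.2                        # names/addresses as in this run
    ethtool --offload "$IFACE" hw-tc-offload on          # let the ice driver offload TC rules
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1  # busy-poll the ADQ socket queues
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel                 # TC0 default, TC1 = 2 ADQ queues
    tc qdisc add dev "$IFACE" ingress
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip "$TADDR"/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP into TC1

The flower filter is what ties it together: anything addressed to the NVMe/TCP listener lands in traffic class 1, whose queues the target's reactors busy-poll.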
00:26:02.972 [2024-11-27 07:21:14.072126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.972 [2024-11-27 07:21:14.072135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.972 [2024-11-27 07:21:14.072143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.972 [2024-11-27 07:21:14.072149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:02.972 [2024-11-27 07:21:14.074220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.972 [2024-11-27 07:21:14.074497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.972 [2024-11-27 07:21:14.074333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.972 [2024-11-27 07:21:14.074496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.544 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.544 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:26:03.544 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.544 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.544 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:03.805 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.806 07:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 [2024-11-27 07:21:14.940011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 Malloc1 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.806 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.806 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.806 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.806 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:04.067 [2024-11-27 07:21:15.013989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.067 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.067 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2452907 00:26:04.067 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:04.067 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.983 07:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:05.983 "tick_rate": 2400000000, 00:26:05.983 "poll_groups": [ 00:26:05.983 { 00:26:05.983 "name": "nvmf_tgt_poll_group_000", 00:26:05.983 "admin_qpairs": 1, 00:26:05.983 "io_qpairs": 4, 00:26:05.983 "current_admin_qpairs": 1, 00:26:05.983 "current_io_qpairs": 4, 00:26:05.983 "pending_bdev_io": 0, 00:26:05.983 "completed_nvme_io": 34156, 00:26:05.983 "transports": [ 00:26:05.983 { 00:26:05.983 "trtype": "TCP" 00:26:05.983 } 00:26:05.983 ] 00:26:05.983 }, 00:26:05.983 { 00:26:05.983 "name": "nvmf_tgt_poll_group_001", 00:26:05.983 "admin_qpairs": 0, 00:26:05.983 "io_qpairs": 0, 00:26:05.983 "current_admin_qpairs": 0, 00:26:05.983 "current_io_qpairs": 0, 00:26:05.983 "pending_bdev_io": 0, 00:26:05.983 "completed_nvme_io": 0, 00:26:05.983 "transports": [ 00:26:05.983 { 00:26:05.983 "trtype": "TCP" 00:26:05.983 } 00:26:05.983 ] 00:26:05.983 }, 00:26:05.983 { 00:26:05.983 "name": "nvmf_tgt_poll_group_002", 00:26:05.983 "admin_qpairs": 0, 00:26:05.983 "io_qpairs": 0, 00:26:05.983 "current_admin_qpairs": 0, 00:26:05.983 "current_io_qpairs": 0, 00:26:05.983 "pending_bdev_io": 0, 00:26:05.983 "completed_nvme_io": 0, 00:26:05.983 "transports": [ 00:26:05.983 { 00:26:05.983 "trtype": "TCP" 00:26:05.983 } 00:26:05.983 ] 00:26:05.983 }, 00:26:05.983 { 00:26:05.983 "name": "nvmf_tgt_poll_group_003", 00:26:05.983 "admin_qpairs": 0, 00:26:05.983 "io_qpairs": 0, 00:26:05.983 "current_admin_qpairs": 0, 00:26:05.983 "current_io_qpairs": 0, 00:26:05.983 "pending_bdev_io": 0, 00:26:05.983 "completed_nvme_io": 0, 00:26:05.983 "transports": [ 00:26:05.983 { 00:26:05.983 "trtype": "TCP" 00:26:05.983 } 00:26:05.983 ] 00:26:05.983 } 00:26:05.983 ] 00:26:05.983 }' 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:26:05.983 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2452907 00:26:14.128 Initializing NVMe Controllers 00:26:14.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:14.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:14.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:14.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:14.128 Initialization complete. Launching workers. 
00:26:14.128 ======================================================== 00:26:14.128 Latency(us) 00:26:14.128 Device Information : IOPS MiB/s Average min max 00:26:14.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5919.40 23.12 10836.49 1125.59 61355.07 00:26:14.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6045.30 23.61 10618.52 1315.97 57710.98 00:26:14.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6851.60 26.76 9369.98 923.78 57411.90 00:26:14.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6098.90 23.82 10526.50 1158.16 55911.37 00:26:14.128 ======================================================== 00:26:14.128 Total : 24915.19 97.32 10304.44 923.78 61355.07 00:26:14.128 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.128 rmmod nvme_tcp 00:26:14.128 rmmod nvme_fabrics 00:26:14.128 rmmod nvme_keyring 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2452785 ']' 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2452785 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2452785 ']' 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2452785 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.128 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2452785 00:26:14.388 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2452785' 00:26:14.389 killing process with pid 2452785 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2452785 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2452785 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.389 
07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.389 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:17.694 00:26:17.694 real 0m54.383s 00:26:17.694 user 2m51.097s 00:26:17.694 sys 0m11.186s 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.694 ************************************ 00:26:17.694 END TEST nvmf_perf_adq 00:26:17.694 ************************************ 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:17.694 ************************************ 00:26:17.694 START TEST nvmf_shutdown 00:26:17.694 ************************************ 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:17.694 * Looking for test storage... 
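Worth noting in the teardown above: firewall cleanup never flushes the whole table. Every rule the test adds goes in through ipts, which tags it with an SPDK_NVMF comment, and iptr later drops exactly the tagged rules. A sketch of the two helpers, reconstructed from the expansions shown at nvmf/common.sh@790 and @791 (the real definitions may differ in detail):

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }    # tag every rule we add
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; } # remove only tagged rules

    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # setup, as traced earlier
    iptr                                                             # teardown, as traced above

This keeps the test from disturbing any firewall state it did not create itself.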
00:26:17.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.694 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:17.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.694 --rc genhtml_branch_coverage=1 00:26:17.695 --rc genhtml_function_coverage=1 00:26:17.695 --rc genhtml_legend=1 00:26:17.695 --rc geninfo_all_blocks=1 00:26:17.695 --rc geninfo_unexecuted_blocks=1 00:26:17.695 00:26:17.695 ' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.695 --rc genhtml_branch_coverage=1 00:26:17.695 --rc genhtml_function_coverage=1 00:26:17.695 --rc genhtml_legend=1 00:26:17.695 --rc geninfo_all_blocks=1 00:26:17.695 --rc geninfo_unexecuted_blocks=1 00:26:17.695 00:26:17.695 ' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.695 --rc genhtml_branch_coverage=1 00:26:17.695 --rc genhtml_function_coverage=1 00:26:17.695 --rc genhtml_legend=1 00:26:17.695 --rc geninfo_all_blocks=1 00:26:17.695 --rc geninfo_unexecuted_blocks=1 00:26:17.695 00:26:17.695 ' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.695 --rc genhtml_branch_coverage=1 00:26:17.695 --rc genhtml_function_coverage=1 00:26:17.695 --rc genhtml_legend=1 00:26:17.695 --rc geninfo_all_blocks=1 00:26:17.695 --rc geninfo_unexecuted_blocks=1 00:26:17.695 00:26:17.695 ' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
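The lcov gate that just ran (lt 1.15 2 via scripts/common.sh cmp_versions) compares versions component by component after splitting on '.', '-' and ':'. A condensed sketch of that logic, not the exact upstream implementation:

    lt() {  # exit 0 when $1 is strictly older than $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && return 1   # newer: not less-than
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && return 0   # older: less-than
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo 'lcov predates 2.x'   # matches the branch taken above

Missing components default to 0 and the 10# prefix forces base-10, so versions of different lengths or with leading zeros compare correctly.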
00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:17.695 07:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.695 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:17.956 ************************************ 00:26:17.956 START TEST nvmf_shutdown_tc1 00:26:17.956 ************************************ 00:26:17.956 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:17.957 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:26.102 07:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:26.102 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:26.103 07:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:26.103 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:26.103 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:26.103 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:26.103 07:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:26.103 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:26.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:26:26.103 00:26:26.103 --- 10.0.0.2 ping statistics --- 00:26:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.103 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:26:26.103 00:26:26.103 --- 10.0.0.1 ping statistics --- 00:26:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.103 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2459428 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2459428 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2459428 ']' 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.103 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.104 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
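As in the earlier perf_adq run, nvmf_tcp_init splits the two E810 ports across network namespaces: port 0 moves into cvl_0_0_ns_spdk and acts as the target while port 1 stays in the root namespace as the initiator, so NVMe/TCP traffic actually crosses the physical link. A condensed sketch of the commands traced above, with the names and addresses from this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                          # sanity check across the wire

The target app is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD, which is why every later RPC and listener binds inside the namespace.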
00:26:26.104 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.104 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.104 [2024-11-27 07:21:36.606083] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:26:26.104 [2024-11-27 07:21:36.606153] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.104 [2024-11-27 07:21:36.708483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.104 [2024-11-27 07:21:36.760530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.104 [2024-11-27 07:21:36.760580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.104 [2024-11-27 07:21:36.760589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.104 [2024-11-27 07:21:36.760596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.104 [2024-11-27 07:21:36.760602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.104 [2024-11-27 07:21:36.762836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.104 [2024-11-27 07:21:36.763009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.104 [2024-11-27 07:21:36.763198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.104 [2024-11-27 07:21:36.763198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.366 [2024-11-27 07:21:37.476854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:26.366 07:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.366 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.627 Malloc1 
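A hedged reconstruction of what each "cat" iteration above appends to rpcs.txt before rpc_cmd replays the batch at shutdown.sh@36. The RPC names are standard SPDK ones; the malloc geometry and serial numbers here are illustrative rather than read from the script:

rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpc.py accepts one command per line on stdin, so the whole batch goes to
# the target in a single invocation
rpc_cmd < "$testdir/rpcs.txt"

The Malloc1..Malloc10 lines and the "Listening on 10.0.0.2 port 4420" notice in the surrounding output are consistent with a batch of this shape.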
00:26:26.627 [2024-11-27 07:21:37.610596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.627 Malloc2 00:26:26.627 Malloc3 00:26:26.627 Malloc4 00:26:26.627 Malloc5 00:26:26.627 Malloc6 00:26:26.890 Malloc7 00:26:26.890 Malloc8 00:26:26.890 Malloc9 00:26:26.890 Malloc10 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2459759 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2459759 /var/tmp/bdevperf.sock 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2459759 ']' 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:26.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
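The launch being traced in the @78-@80 lines below, condensed into its shell form. The command itself is echoed verbatim at shutdown.sh line 74 further down; only the process-substitution plumbing behind /dev/fd/63 is spelled out here:

"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!
waitforlisten $perfpid /var/tmp/bdevperf.sock

# only after this returns (shutdown.sh@81 below) are all ten
# bdev_nvme_attach_controller calls from the generated JSON guaranteed done
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init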
00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.890 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.890 { 00:26:26.890 "params": { 00:26:26.890 "name": "Nvme$subsystem", 00:26:26.890 "trtype": "$TEST_TRANSPORT", 00:26:26.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.890 "adrfam": "ipv4", 00:26:26.890 "trsvcid": "$NVMF_PORT", 00:26:26.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.890 "hdgst": ${hdgst:-false}, 00:26:26.890 "ddgst": ${ddgst:-false} 00:26:26.890 }, 00:26:26.890 "method": "bdev_nvme_attach_controller" 00:26:26.890 } 00:26:26.890 EOF 00:26:26.890 )") 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.152 { 00:26:27.152 "params": { 00:26:27.152 "name": "Nvme$subsystem", 00:26:27.152 "trtype": "$TEST_TRANSPORT", 00:26:27.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.152 "adrfam": "ipv4", 00:26:27.152 "trsvcid": "$NVMF_PORT", 00:26:27.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.152 "hdgst": ${hdgst:-false}, 00:26:27.152 "ddgst": ${ddgst:-false} 00:26:27.152 }, 00:26:27.152 "method": "bdev_nvme_attach_controller" 00:26:27.152 } 00:26:27.152 EOF 00:26:27.152 )") 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.152 { 00:26:27.152 "params": { 00:26:27.152 "name": "Nvme$subsystem", 00:26:27.152 "trtype": "$TEST_TRANSPORT", 00:26:27.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.152 "adrfam": "ipv4", 00:26:27.152 "trsvcid": "$NVMF_PORT", 00:26:27.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.152 "hdgst": ${hdgst:-false}, 00:26:27.152 "ddgst": ${ddgst:-false} 00:26:27.152 }, 00:26:27.152 "method": "bdev_nvme_attach_controller" 
00:26:27.152 } 00:26:27.152 EOF 00:26:27.152 )") 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.152 { 00:26:27.152 "params": { 00:26:27.152 "name": "Nvme$subsystem", 00:26:27.152 "trtype": "$TEST_TRANSPORT", 00:26:27.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.152 "adrfam": "ipv4", 00:26:27.152 "trsvcid": "$NVMF_PORT", 00:26:27.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.152 "hdgst": ${hdgst:-false}, 00:26:27.152 "ddgst": ${ddgst:-false} 00:26:27.152 }, 00:26:27.152 "method": "bdev_nvme_attach_controller" 00:26:27.152 } 00:26:27.152 EOF 00:26:27.152 )") 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.152 { 00:26:27.152 "params": { 00:26:27.152 "name": "Nvme$subsystem", 00:26:27.152 "trtype": "$TEST_TRANSPORT", 00:26:27.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.152 "adrfam": "ipv4", 00:26:27.152 "trsvcid": "$NVMF_PORT", 00:26:27.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.152 "hdgst": ${hdgst:-false}, 00:26:27.152 "ddgst": ${ddgst:-false} 00:26:27.152 }, 00:26:27.152 "method": "bdev_nvme_attach_controller" 00:26:27.152 } 00:26:27.152 EOF 00:26:27.152 )") 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.152 { 00:26:27.152 "params": { 00:26:27.152 "name": "Nvme$subsystem", 00:26:27.152 "trtype": "$TEST_TRANSPORT", 00:26:27.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.152 "adrfam": "ipv4", 00:26:27.152 "trsvcid": "$NVMF_PORT", 00:26:27.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.152 "hdgst": ${hdgst:-false}, 00:26:27.152 "ddgst": ${ddgst:-false} 00:26:27.152 }, 00:26:27.152 "method": "bdev_nvme_attach_controller" 00:26:27.152 } 00:26:27.152 EOF 00:26:27.152 )") 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.152 [2024-11-27 07:21:38.140389] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:26:27.152 [2024-11-27 07:21:38.140464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:27.152 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.153 { 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme$subsystem", 00:26:27.153 "trtype": "$TEST_TRANSPORT", 00:26:27.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "$NVMF_PORT", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.153 "hdgst": ${hdgst:-false}, 00:26:27.153 "ddgst": ${ddgst:-false} 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 } 00:26:27.153 EOF 00:26:27.153 )") 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.153 { 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme$subsystem", 00:26:27.153 "trtype": "$TEST_TRANSPORT", 00:26:27.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "$NVMF_PORT", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.153 "hdgst": ${hdgst:-false}, 00:26:27.153 "ddgst": ${ddgst:-false} 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 } 00:26:27.153 EOF 00:26:27.153 )") 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.153 { 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme$subsystem", 00:26:27.153 "trtype": "$TEST_TRANSPORT", 00:26:27.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "$NVMF_PORT", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.153 "hdgst": ${hdgst:-false}, 00:26:27.153 "ddgst": ${ddgst:-false} 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 } 00:26:27.153 EOF 00:26:27.153 )") 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.153 { 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme$subsystem", 00:26:27.153 "trtype": "$TEST_TRANSPORT", 00:26:27.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.153 "adrfam": "ipv4", 
00:26:27.153 "trsvcid": "$NVMF_PORT", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.153 "hdgst": ${hdgst:-false}, 00:26:27.153 "ddgst": ${ddgst:-false} 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 } 00:26:27.153 EOF 00:26:27.153 )") 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:27.153 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme1", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme2", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme3", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme4", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme5", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme6", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme7", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 
"adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme8", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme9", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 },{ 00:26:27.153 "params": { 00:26:27.153 "name": "Nvme10", 00:26:27.153 "trtype": "tcp", 00:26:27.153 "traddr": "10.0.0.2", 00:26:27.153 "adrfam": "ipv4", 00:26:27.153 "trsvcid": "4420", 00:26:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:27.153 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:27.153 "hdgst": false, 00:26:27.153 "ddgst": false 00:26:27.153 }, 00:26:27.153 "method": "bdev_nvme_attach_controller" 00:26:27.153 }' 00:26:27.153 [2024-11-27 07:21:38.237348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.153 [2024-11-27 07:21:38.291626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2459759 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:28.539 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:29.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2459759 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2459428 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.482 { 00:26:29.482 "params": { 00:26:29.482 "name": "Nvme$subsystem", 00:26:29.482 "trtype": "$TEST_TRANSPORT", 00:26:29.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.482 "adrfam": "ipv4", 00:26:29.482 "trsvcid": "$NVMF_PORT", 00:26:29.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.482 "hdgst": ${hdgst:-false}, 00:26:29.482 "ddgst": ${ddgst:-false} 00:26:29.482 }, 00:26:29.482 "method": "bdev_nvme_attach_controller" 00:26:29.482 } 00:26:29.482 EOF 00:26:29.482 )") 00:26:29.482 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 [2024-11-27 07:21:40.613287] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
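Stepping back, the tc1 sequence that the last few blocks trace (shutdown.sh @84 through @92) is, in hedged outline:

kill -9 "$perfpid"          # @84: hard-kill bdev_svc while its NVMe-oF connections are live
rm -f /var/run/spdk_bdev1   # @85: drop its leftover socket
sleep 1                     # @88: give the target a moment to reap the dead qpairs

kill -0 "$nvmfpid"          # @89: the target must have survived the abrupt client death

# @92: re-attach with bdevperf and run a 1-second verify workload across
# all ten subsystems, using the same generated JSON config
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1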
00:26:29.483 [2024-11-27 07:21:40.613344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460390 ] 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.483 { 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme$subsystem", 00:26:29.483 "trtype": "$TEST_TRANSPORT", 00:26:29.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "$NVMF_PORT", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.483 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.483 "hdgst": ${hdgst:-false}, 00:26:29.483 "ddgst": ${ddgst:-false} 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 } 00:26:29.483 EOF 00:26:29.483 )") 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:29.483 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme1", 00:26:29.483 "trtype": "tcp", 00:26:29.483 "traddr": "10.0.0.2", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "4420", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.483 "hdgst": false, 00:26:29.483 "ddgst": false 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 },{ 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme2", 00:26:29.483 "trtype": "tcp", 00:26:29.483 "traddr": "10.0.0.2", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "4420", 00:26:29.483 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:29.483 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:29.483 "hdgst": false, 00:26:29.483 "ddgst": false 00:26:29.483 }, 00:26:29.483 "method": "bdev_nvme_attach_controller" 00:26:29.483 },{ 00:26:29.483 "params": { 00:26:29.483 "name": "Nvme3", 00:26:29.483 "trtype": "tcp", 00:26:29.483 "traddr": "10.0.0.2", 00:26:29.483 "adrfam": "ipv4", 00:26:29.483 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:29.484 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 },{ 00:26:29.484 "params": { 00:26:29.484 "name": "Nvme4", 00:26:29.484 "trtype": "tcp", 00:26:29.484 "traddr": "10.0.0.2", 00:26:29.484 "adrfam": "ipv4", 00:26:29.484 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:29.484 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 },{ 00:26:29.484 "params": { 00:26:29.484 "name": "Nvme5", 00:26:29.484 "trtype": "tcp", 00:26:29.484 "traddr": "10.0.0.2", 00:26:29.484 "adrfam": "ipv4", 00:26:29.484 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:29.484 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 },{ 00:26:29.484 "params": { 00:26:29.484 "name": "Nvme6", 00:26:29.484 "trtype": "tcp", 00:26:29.484 "traddr": "10.0.0.2", 00:26:29.484 "adrfam": "ipv4", 00:26:29.484 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:29.484 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 },{ 00:26:29.484 "params": { 00:26:29.484 "name": "Nvme7", 00:26:29.484 "trtype": "tcp", 00:26:29.484 "traddr": "10.0.0.2", 00:26:29.484 "adrfam": "ipv4", 00:26:29.484 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:29.484 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 },{ 00:26:29.484 "params": { 00:26:29.484 "name": "Nvme8", 00:26:29.484 "trtype": "tcp", 00:26:29.484 "traddr": "10.0.0.2", 00:26:29.484 "adrfam": "ipv4", 00:26:29.484 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:29.484 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 },{ 00:26:29.484 "params": { 00:26:29.484 "name": "Nvme9", 00:26:29.484 "trtype": "tcp", 00:26:29.484 "traddr": "10.0.0.2", 00:26:29.484 "adrfam": "ipv4", 00:26:29.484 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:29.484 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 },{ 00:26:29.484 "params": { 00:26:29.484 "name": "Nvme10", 00:26:29.484 "trtype": "tcp", 00:26:29.484 "traddr": "10.0.0.2", 00:26:29.484 "adrfam": "ipv4", 00:26:29.484 "trsvcid": "4420", 00:26:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:29.484 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:29.484 "hdgst": false, 00:26:29.484 "ddgst": false 00:26:29.484 }, 00:26:29.484 "method": "bdev_nvme_attach_controller" 00:26:29.484 }' 00:26:29.745 [2024-11-27 07:21:40.703610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.745 [2024-11-27 07:21:40.739299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.132 Running I/O for 1 seconds... 00:26:32.074 1920.00 IOPS, 120.00 MiB/s 00:26:32.074 Latency(us) 00:26:32.074 [2024-11-27T06:21:43.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.074 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme1n1 : 1.17 219.56 13.72 0.00 0.00 288184.53 16384.00 249910.61 00:26:32.074 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme2n1 : 1.14 225.08 14.07 0.00 0.00 276763.31 18459.31 241172.48 00:26:32.074 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme3n1 : 1.08 237.85 14.87 0.00 0.00 256793.60 15073.28 256901.12 00:26:32.074 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme4n1 : 1.17 276.35 17.27 0.00 0.00 217337.38 2908.16 248162.99 00:26:32.074 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme5n1 : 1.18 217.51 13.59 0.00 0.00 272488.32 24248.32 251658.24 00:26:32.074 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme6n1 : 1.18 216.78 13.55 0.00 0.00 268733.87 18131.63 281367.89 00:26:32.074 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme7n1 : 1.14 224.59 14.04 0.00 0.00 253659.95 16602.45 256901.12 00:26:32.074 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme8n1 : 1.19 269.64 16.85 0.00 0.00 208501.08 21299.20 244667.73 00:26:32.074 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme9n1 : 1.20 267.54 16.72 0.00 0.00 206493.53 14199.47 248162.99 00:26:32.074 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:32.074 Verification LBA range: start 0x0 length 0x400 00:26:32.074 Nvme10n1 : 1.19 269.20 16.83 0.00 0.00 201142.44 14199.47 237677.23 00:26:32.074 [2024-11-27T06:21:43.279Z] =================================================================================================================== 00:26:32.074 [2024-11-27T06:21:43.279Z] Total : 2424.09 151.51 0.00 0.00 241644.26 2908.16 281367.89 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.335 rmmod nvme_tcp 00:26:32.335 rmmod nvme_fabrics 00:26:32.335 rmmod nvme_keyring 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2459428 ']' 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2459428 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2459428 ']' 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2459428 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:26:32.335 07:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.335 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2459428 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2459428' 00:26:32.597 killing process with pid 2459428 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2459428 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2459428 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.597 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.858 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.858 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.858 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.858 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.858 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.776 00:26:34.776 real 0m16.971s 00:26:34.776 user 0m34.081s 00:26:34.776 sys 0m7.070s 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:34.776 ************************************ 00:26:34.776 END TEST nvmf_shutdown_tc1 00:26:34.776 ************************************ 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:34.776 07:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:34.776 ************************************ 00:26:34.776 START TEST nvmf_shutdown_tc2 00:26:34.776 ************************************ 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.776 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:35.038 07:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 
- 0x159b)' 00:26:35.038 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:35.038 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:35.038 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.038 
07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:35.038 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:35.038 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:35.039 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.039 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.039 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.039 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:35.039 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:35.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:26:35.301 00:26:35.301 --- 10.0.0.2 ping statistics --- 00:26:35.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.301 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:35.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:26:35.301 00:26:35.301 --- 10.0.0.1 ping statistics --- 00:26:35.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.301 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2461563 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2461563 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2461563 ']' 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
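(Everything from nvmf_tcp_init through the two pings is the harness splitting one dual-port NIC into a target side and an initiator side: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule opens the NVMe/TCP port. Condensed into a standalone sequence, with commands as they appear in the trace, lightly trimmed; the harness also flushes addresses first and tags its iptables rule with an SPDK_NVMF comment so cleanup can find it later, and all of this needs root:

# Isolate the target port in its own namespace so both ends of the
# link live on one host but still traverse the real wire.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic (default port 4420) in on the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
)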
00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.301 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 [2024-11-27 07:21:46.410208] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:26:35.301 [2024-11-27 07:21:46.410276] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.562 [2024-11-27 07:21:46.505661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.562 [2024-11-27 07:21:46.543787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.562 [2024-11-27 07:21:46.543822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.562 [2024-11-27 07:21:46.543828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.562 [2024-11-27 07:21:46.543833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.562 [2024-11-27 07:21:46.543837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.562 [2024-11-27 07:21:46.545224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.562 [2024-11-27 07:21:46.545551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.562 [2024-11-27 07:21:46.545668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.562 [2024-11-27 07:21:46.545668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.134 [2024-11-27 07:21:47.251265] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:36.134 07:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.134 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.135 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.396 Malloc1 
00:26:36.396 [2024-11-27 07:21:47.374707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.396 Malloc2 00:26:36.396 Malloc3 00:26:36.396 Malloc4 00:26:36.396 Malloc5 00:26:36.396 Malloc6 00:26:36.396 Malloc7 00:26:36.658 Malloc8 00:26:36.658 Malloc9 00:26:36.658 Malloc10 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2461942 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2461942 /var/tmp/bdevperf.sock 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2461942 ']' 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
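(The create_subsystems phase above loops i over {1..10}, appending one block of RPC commands per subsystem to rpcs.txt via cat; the Malloc1 through Malloc10 lines are those bdevs coming into existence, each exported as cnode$i listening on 10.0.0.2:4420. The exact lines each cat emits are not visible in this trace, so the following is a plausible reconstruction using the standard SPDK RPC names, not the script's literal body; the harness replays the batch through its rpc_cmd wrapper, while plain rpc.py accepts the same batch on stdin:

num_subsystems=({1..10})
rm -f rpcs.txt
for i in "${num_subsystems[@]}"; do
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# Replay the accumulated batch against the running target in one call.
scripts/rpc.py < rpcs.txt
)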
00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.658 { 00:26:36.658 "params": { 00:26:36.658 "name": "Nvme$subsystem", 00:26:36.658 "trtype": "$TEST_TRANSPORT", 00:26:36.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.658 "adrfam": "ipv4", 00:26:36.658 "trsvcid": "$NVMF_PORT", 00:26:36.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.658 "hdgst": ${hdgst:-false}, 00:26:36.658 "ddgst": ${ddgst:-false} 00:26:36.658 }, 00:26:36.658 "method": "bdev_nvme_attach_controller" 00:26:36.658 } 00:26:36.658 EOF 00:26:36.658 )") 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.658 { 00:26:36.658 "params": { 00:26:36.658 "name": "Nvme$subsystem", 00:26:36.658 "trtype": "$TEST_TRANSPORT", 00:26:36.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.658 "adrfam": "ipv4", 00:26:36.658 "trsvcid": "$NVMF_PORT", 00:26:36.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.658 "hdgst": ${hdgst:-false}, 00:26:36.658 "ddgst": ${ddgst:-false} 00:26:36.658 }, 00:26:36.658 "method": "bdev_nvme_attach_controller" 00:26:36.658 } 00:26:36.658 EOF 00:26:36.658 )") 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.658 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.658 { 00:26:36.658 "params": { 00:26:36.658 "name": "Nvme$subsystem", 00:26:36.658 "trtype": "$TEST_TRANSPORT", 00:26:36.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.658 "adrfam": "ipv4", 00:26:36.658 "trsvcid": "$NVMF_PORT", 00:26:36.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.658 "hdgst": ${hdgst:-false}, 00:26:36.658 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": 
"bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.659 { 00:26:36.659 "params": { 00:26:36.659 "name": "Nvme$subsystem", 00:26:36.659 "trtype": "$TEST_TRANSPORT", 00:26:36.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.659 "adrfam": "ipv4", 00:26:36.659 "trsvcid": "$NVMF_PORT", 00:26:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.659 "hdgst": ${hdgst:-false}, 00:26:36.659 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": "bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.659 { 00:26:36.659 "params": { 00:26:36.659 "name": "Nvme$subsystem", 00:26:36.659 "trtype": "$TEST_TRANSPORT", 00:26:36.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.659 "adrfam": "ipv4", 00:26:36.659 "trsvcid": "$NVMF_PORT", 00:26:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.659 "hdgst": ${hdgst:-false}, 00:26:36.659 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": "bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.659 { 00:26:36.659 "params": { 00:26:36.659 "name": "Nvme$subsystem", 00:26:36.659 "trtype": "$TEST_TRANSPORT", 00:26:36.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.659 "adrfam": "ipv4", 00:26:36.659 "trsvcid": "$NVMF_PORT", 00:26:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.659 "hdgst": ${hdgst:-false}, 00:26:36.659 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": "bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.659 [2024-11-27 07:21:47.830731] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:26:36.659 [2024-11-27 07:21:47.830785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461942 ] 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.659 { 00:26:36.659 "params": { 00:26:36.659 "name": "Nvme$subsystem", 00:26:36.659 "trtype": "$TEST_TRANSPORT", 00:26:36.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.659 "adrfam": "ipv4", 00:26:36.659 "trsvcid": "$NVMF_PORT", 00:26:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.659 "hdgst": ${hdgst:-false}, 00:26:36.659 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": "bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.659 { 00:26:36.659 "params": { 00:26:36.659 "name": "Nvme$subsystem", 00:26:36.659 "trtype": "$TEST_TRANSPORT", 00:26:36.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.659 "adrfam": "ipv4", 00:26:36.659 "trsvcid": "$NVMF_PORT", 00:26:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.659 "hdgst": ${hdgst:-false}, 00:26:36.659 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": "bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.659 { 00:26:36.659 "params": { 00:26:36.659 "name": "Nvme$subsystem", 00:26:36.659 "trtype": "$TEST_TRANSPORT", 00:26:36.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.659 "adrfam": "ipv4", 00:26:36.659 "trsvcid": "$NVMF_PORT", 00:26:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.659 "hdgst": ${hdgst:-false}, 00:26:36.659 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": "bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.659 { 00:26:36.659 "params": { 00:26:36.659 "name": "Nvme$subsystem", 00:26:36.659 "trtype": "$TEST_TRANSPORT", 00:26:36.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.659 
"adrfam": "ipv4", 00:26:36.659 "trsvcid": "$NVMF_PORT", 00:26:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.659 "hdgst": ${hdgst:-false}, 00:26:36.659 "ddgst": ${ddgst:-false} 00:26:36.659 }, 00:26:36.659 "method": "bdev_nvme_attach_controller" 00:26:36.659 } 00:26:36.659 EOF 00:26:36.659 )") 00:26:36.659 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.921 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:26:36.921 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:36.921 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme1", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme2", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme3", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme4", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme5", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme6", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme7", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 
00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme8", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme9", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 },{ 00:26:36.921 "params": { 00:26:36.921 "name": "Nvme10", 00:26:36.921 "trtype": "tcp", 00:26:36.921 "traddr": "10.0.0.2", 00:26:36.921 "adrfam": "ipv4", 00:26:36.921 "trsvcid": "4420", 00:26:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:36.921 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:36.921 "hdgst": false, 00:26:36.921 "ddgst": false 00:26:36.921 }, 00:26:36.921 "method": "bdev_nvme_attach_controller" 00:26:36.921 }' 00:26:36.921 [2024-11-27 07:21:47.919296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.921 [2024-11-27 07:21:47.956598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.834 Running I/O for 10 seconds... 
00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:38.834 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.095 07:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:39.095 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2461942 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2461942 ']' 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2461942 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2461942 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2461942' 00:26:39.356 killing process with pid 2461942 00:26:39.356 07:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2461942 00:26:39.356 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2461942 00:26:39.356 Received shutdown signal, test time was about 0.983506 seconds 00:26:39.356 00:26:39.356 Latency(us) 00:26:39.356 [2024-11-27T06:21:50.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.356 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme1n1 : 0.97 262.65 16.42 0.00 0.00 240884.69 20097.71 227191.47 00:26:39.356 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme2n1 : 0.97 264.21 16.51 0.00 0.00 234651.73 22063.79 251658.24 00:26:39.356 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme3n1 : 0.97 263.36 16.46 0.00 0.00 230679.89 17694.72 246415.36 00:26:39.356 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme4n1 : 0.97 264.97 16.56 0.00 0.00 224501.12 20534.61 225443.84 00:26:39.356 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme5n1 : 0.95 201.36 12.59 0.00 0.00 288788.48 31020.37 235929.60 00:26:39.356 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme6n1 : 0.95 211.73 13.23 0.00 0.00 266699.56 3372.37 251658.24 00:26:39.356 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme7n1 : 0.98 260.53 16.28 0.00 0.00 214342.40 15291.73 256901.12 00:26:39.356 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme8n1 : 0.98 261.80 16.36 0.00 0.00 208452.27 16602.45 246415.36 00:26:39.356 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme9n1 : 0.96 199.52 12.47 0.00 0.00 266562.84 16274.77 270882.13 00:26:39.356 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.356 Verification LBA range: start 0x0 length 0x400 00:26:39.356 Nvme10n1 : 0.96 200.21 12.51 0.00 0.00 259159.32 17694.72 251658.24 00:26:39.356 [2024-11-27T06:21:50.561Z] =================================================================================================================== 00:26:39.356 [2024-11-27T06:21:50.561Z] Total : 2390.34 149.40 0.00 0.00 240593.06 3372.37 270882.13 00:26:39.617 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2461563 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:40.560 07:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:40.560 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:40.560 rmmod nvme_tcp 00:26:40.560 rmmod nvme_fabrics 00:26:40.560 rmmod nvme_keyring 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2461563 ']' 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2461563 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2461563 ']' 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2461563 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2461563 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2461563' 00:26:40.821 killing process with pid 2461563 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2461563 00:26:40.821 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2461563 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:41.083 07:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.083 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.000 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:43.000 00:26:43.000 real 0m8.184s 00:26:43.000 user 0m25.149s 00:26:43.000 sys 0m1.348s 00:26:43.000 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.000 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.000 ************************************ 00:26:43.000 END TEST nvmf_shutdown_tc2 00:26:43.000 ************************************ 00:26:43.000 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:43.000 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:43.000 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:43.000 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:43.262 ************************************ 00:26:43.262 START TEST nvmf_shutdown_tc3 00:26:43.262 ************************************ 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.262 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:43.263 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:43.263 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.263 07:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:43.263 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:43.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.263 07:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:43.263 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:43.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:26:43.525 00:26:43.525 --- 10.0.0.2 ping statistics --- 00:26:43.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.525 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:26:43.525 00:26:43.525 --- 10.0.0.1 ping statistics --- 00:26:43.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.525 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2463333 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2463333 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2463333 ']' 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.525 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.525 [2024-11-27 07:21:54.706475] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:26:43.525 [2024-11-27 07:21:54.706540] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.788 [2024-11-27 07:21:54.803912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.788 [2024-11-27 07:21:54.839037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.788 [2024-11-27 07:21:54.839073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.788 [2024-11-27 07:21:54.839079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.788 [2024-11-27 07:21:54.839084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.788 [2024-11-27 07:21:54.839088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
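The records above show nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocking until the target answers RPCs on /var/tmp/spdk.sock. A minimal bash sketch of that start-and-wait pattern, assuming the working directory is an SPDK checkout; the single ip netns exec wrapper, the retry budget, and the sleep interval are illustrative simplifications, not the harness's exact logic:

    # Start the NVMe-oF target inside the test namespace, then poll its RPC socket.
    NS=cvl_0_0_ns_spdk
    RPC_SOCK=/var/tmp/spdk.sock
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods fails until the target is listening on the socket
        ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1    # give up early if the target process died
        sleep 0.1
    done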
00:26:43.788 [2024-11-27 07:21:54.840668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.788 [2024-11-27 07:21:54.840820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.788 [2024-11-27 07:21:54.840971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.788 [2024-11-27 07:21:54.840973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.358 [2024-11-27 07:21:55.556706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.358 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.619 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.619 Malloc1 00:26:44.619 [2024-11-27 07:21:55.683187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.619 Malloc2 00:26:44.619 Malloc3 00:26:44.619 Malloc4 00:26:44.619 Malloc5 00:26:44.879 Malloc6 00:26:44.879 Malloc7 00:26:44.879 Malloc8 00:26:44.879 Malloc9 00:26:44.879 Malloc10 00:26:44.879 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.879 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:44.879 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.879 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.879 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2463570 00:26:44.879 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2463570 /var/tmp/bdevperf.sock 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2463570 ']' 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:45.141 07:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:45.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 
"name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 [2024-11-27 07:21:56.130039] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:26:45.141 [2024-11-27 07:21:56.130093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463570 ] 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.141 "ddgst": ${ddgst:-false} 00:26:45.141 }, 00:26:45.141 "method": "bdev_nvme_attach_controller" 00:26:45.141 } 00:26:45.141 EOF 00:26:45.141 )") 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.141 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.141 { 00:26:45.141 "params": { 00:26:45.141 "name": "Nvme$subsystem", 00:26:45.141 "trtype": "$TEST_TRANSPORT", 00:26:45.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.141 "adrfam": "ipv4", 00:26:45.141 "trsvcid": "$NVMF_PORT", 00:26:45.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.141 "hdgst": ${hdgst:-false}, 00:26:45.142 "ddgst": ${ddgst:-false} 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 } 00:26:45.142 EOF 00:26:45.142 )") 00:26:45.142 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.142 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.142 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.142 { 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme$subsystem", 00:26:45.142 "trtype": "$TEST_TRANSPORT", 00:26:45.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.142 
"adrfam": "ipv4", 00:26:45.142 "trsvcid": "$NVMF_PORT", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.142 "hdgst": ${hdgst:-false}, 00:26:45.142 "ddgst": ${ddgst:-false} 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 } 00:26:45.142 EOF 00:26:45.142 )") 00:26:45.142 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:45.142 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:26:45.142 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:45.142 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme1", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme2", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme3", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme4", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme5", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme6", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme7", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 
00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme8", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme9", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 },{ 00:26:45.142 "params": { 00:26:45.142 "name": "Nvme10", 00:26:45.142 "trtype": "tcp", 00:26:45.142 "traddr": "10.0.0.2", 00:26:45.142 "adrfam": "ipv4", 00:26:45.142 "trsvcid": "4420", 00:26:45.142 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:45.142 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:45.142 "hdgst": false, 00:26:45.142 "ddgst": false 00:26:45.142 }, 00:26:45.142 "method": "bdev_nvme_attach_controller" 00:26:45.142 }' 00:26:45.142 [2024-11-27 07:21:56.218013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.142 [2024-11-27 07:21:56.255607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.056 Running I/O for 10 seconds... 
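The ten heredoc iterations traced above each append one bdev_nvme_attach_controller stanza to config[], one per subsystem NQN; joined with IFS=, and passed through jq, they become the resolved JSON printed just before bdevperf started, delivered over --json /dev/fd/63. A condensed, self-contained sketch of the same pattern; gen_json_sketch is a hypothetical name, and the outer subsystems/bdev envelope is an assumption about the wrapper that the real gen_nvmf_target_json in nvmf/common.sh builds:

    # Emit a bdevperf JSON config with one NVMe-oF TCP controller per argument.
    gen_json_sketch() {
        local s entries=()
        local fmt='{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2",'
        fmt+='"adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s",'
        fmt+='"hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},'
        fmt+='"method":"bdev_nvme_attach_controller"}'
        for s in "$@"; do
            entries+=("$(printf "$fmt" "$s" "$s" "$s")")
        done
        local IFS=,
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}"
    }
    # Usage mirroring the trace, feeding the config over a /dev/fd path:
    # ./build/examples/bdevperf --json <(gen_json_sketch {1..10}) -q 64 -o 65536 -w verify -t 10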
00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:47.056 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:47.056 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:47.056 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:47.056 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:47.056 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:47.056 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.056 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.316 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.316 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:47.316 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:47.316 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2463333 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2463333 ']' 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2463333 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463333 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:47.593 07:21:58 
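The waitforio loop just traced polls bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads (3, then 67, then 131 above), so the tc3 shutdown is exercised against a target that is demonstrably moving I/O. The same loop in sketch form; the socket path, the 100-op threshold, the 10-attempt budget, and the 0.25 s sleep match the trace, while error handling is simplified:

    # Wait until Nvme1n1 shows enough completed reads before triggering shutdown.
    sock=/var/tmp/bdevperf.sock
    for _ in $(seq 1 10); do
        ops=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme1n1 |
            jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break    # enough I/O observed; proceed to kill the target
        sleep 0.25
    done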
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463333'
killing process with pid 2463333
00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2463333
00:26:47.593 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2463333
00:26:47.593 [2024-11-27 07:21:58.661653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a95f0 is same with the state(6) to be set
00:26:47.593 [2024-11-27 07:21:58.661725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a95f0 is same with the state(6) to be set
00:26:47.593 [2024-11-27 07:21:58.668371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ac070 is same with the state(6) to be set
[... the same tcp.c:1790 record repeats for tqpair=0x10ac070 with timestamps 07:21:58.668402 through 07:21:58.668700 ...]
00:26:47.594 [2024-11-27 07:21:58.671009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a9fb0 is same with the state(6) to be set
[... the same record repeats for tqpair=0x10a9fb0 with timestamps 07:21:58.671031 through 07:21:58.671339; the capture then cuts off mid-record ...]
00:26:47.594 [2024-11-27
07:21:58.671344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a9fb0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.594 [2024-11-27 07:21:58.672518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same 
with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672678] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.672762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa4a0 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the 
state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.595 [2024-11-27 07:21:58.673578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 
07:21:58.673643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.673720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa820 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same 
with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674797] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the 
state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.596 [2024-11-27 07:21:58.674920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.674965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aacf0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 
07:21:58.675957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.675997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same with the state(6) to be set 00:26:47.597 [2024-11-27 07:21:58.676057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab1c0 is same 
00:26:47.598 [2024-11-27 07:21:58.676302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.598 [2024-11-27 07:21:58.676812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.598 [2024-11-27 07:21:58.676820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.598 [2024-11-27 07:21:58.676825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.676989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.676996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.676999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set
00:26:47.599 [2024-11-27 07:21:58.677004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.677014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.677021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.677032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.677039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.677048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.677055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.677064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.599 [2024-11-27 07:21:58.677072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.599 [2024-11-27 07:21:58.677081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:47.599 [2024-11-27 07:21:58.677258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.599 [2024-11-27 07:21:58.677267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.600 [2024-11-27 07:21:58.677420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.600 [2024-11-27 07:21:58.677436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:47.600 [2024-11-27 07:21:58.677925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.677947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.677963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.677978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.677987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.677994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e7180 is same with the state(6) to be set 00:26:47.600 [2024-11-27 07:21:58.678028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e7980 is same with the state(6) to be set 00:26:47.600 [2024-11-27 07:21:58.678113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2825d50 is same with the state(6) to be set 00:26:47.600 [2024-11-27 07:21:58.678210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3610 is same with the state(6) to be set 00:26:47.600 [2024-11-27 07:21:58.678311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dc6a0 is same with the state(6) to be set 00:26:47.600 [2024-11-27 07:21:58.678397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bb850 is same with the state(6) to be set 00:26:47.600 [2024-11-27 07:21:58.678479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.600 [2024-11-27 07:21:58.678495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.600 [2024-11-27 07:21:58.678502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9fc0 is same with the state(6) to be set 00:26:47.601 [2024-11-27 07:21:58.678560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dcc90 is same with the state(6) to be set 00:26:47.601 [2024-11-27 07:21:58.678647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.601 [2024-11-27 07:21:58.678702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbcc0 is same with the state(6) to be set 00:26:47.601 [2024-11-27 07:21:58.678759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.601 [2024-11-27 07:21:58.678902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.678963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.678970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.680389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.680419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.680431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.680438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.680460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.680508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.680563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.680614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.680668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.680720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.680773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 
[2024-11-27 07:21:58.680822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.680888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.680937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.601 [2024-11-27 07:21:58.681757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.601 [2024-11-27 07:21:58.681810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.602 [2024-11-27 07:21:58.681863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.602 [2024-11-27 
07:21:58.681914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.602 [2024-11-27 07:21:58.681969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.602 [2024-11-27 07:21:58.682018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.602 [2024-11-27 07:21:58.682073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.602 [2024-11-27 07:21:58.682129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.602 [2024-11-27 07:21:58.682203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.602 [2024-11-27 07:21:58.682255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.602 [2024-11-27 07:21:58.682310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.602 [2024-11-27 07:21:58.682364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.602 [2024-11-27 07:21:58.682418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.602 [2024-11-27 07:21:58.682470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.602 [2024-11-27 07:21:58.689629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.689708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab6b0 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be 
set 00:26:47.602 [2024-11-27 07:21:58.690286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690505] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.602 [2024-11-27 07:21:58.690550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.690598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abb80 is same with the state(6) to be set 00:26:47.603 [2024-11-27 07:21:58.699080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.699628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.699635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.700999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.701021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.701038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.701048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.701060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.701069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.701080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.603 [2024-11-27 07:21:58.701089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.603 [2024-11-27 07:21:58.701100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.604 [2024-11-27 07:21:58.701233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 
07:21:58.701404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701576] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.604 [2024-11-27 07:21:58.701603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.604 [2024-11-27 07:21:58.701610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.701990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.701998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.702007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.702015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.702024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.702032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.702041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.702049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.702058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.702065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.702075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.702083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.605 [2024-11-27 07:21:58.702092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.605 [2024-11-27 07:21:58.702100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.702110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.702118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.702570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:47.606 [2024-11-27 07:21:58.702611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bb850 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e7180 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e7980 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2825d50 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3610 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.606 [2024-11-27 07:21:58.702741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.702750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.606 [2024-11-27 07:21:58.702757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.702766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.606 [2024-11-27 07:21:58.702774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.702781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.606 [2024-11-27 07:21:58.702789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.702796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2833bb0 is same with the state(6) to be set 00:26:47.606 [2024-11-27 07:21:58.702815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dc6a0 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702830] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9fc0 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dcc90 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.702861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbcc0 (9): Bad file descriptor 00:26:47.606 [2024-11-27 07:21:58.705406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.606 [2024-11-27 07:21:58.705788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.606 [2024-11-27 07:21:58.705796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.705992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.705999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.607 [2024-11-27 07:21:58.706109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.607 [2024-11-27 07:21:58.706218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.607 [2024-11-27 07:21:58.706225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 
07:21:58.706285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.608 [2024-11-27 07:21:58.706531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.608 [2024-11-27 07:21:58.706689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:47.608 [2024-11-27 07:21:58.706709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:47.608 [2024-11-27 07:21:58.708879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.608 [2024-11-27 07:21:58.708907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bb850 with addr=10.0.0.2, port=4420 00:26:47.608 [2024-11-27 07:21:58.708917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bb850 is same with the state(6) to be set 00:26:47.608 [2024-11-27 07:21:58.709363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.608 [2024-11-27 07:21:58.709401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbcc0 with addr=10.0.0.2, port=4420 00:26:47.608 [2024-11-27 07:21:58.709414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbcc0 is same with the state(6) to be set 00:26:47.608 [2024-11-27 07:21:58.709639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.608 [2024-11-27 07:21:58.709654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27e7980 with addr=10.0.0.2, port=4420 00:26:47.608 [2024-11-27 07:21:58.709662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e7980 is same with the state(6) to be set 00:26:47.608 [2024-11-27 07:21:58.710011] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.608 [2024-11-27 07:21:58.710331] 
00:26:47.608 [2024-11-27 07:21:58.710375] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:47.608 [2024-11-27 07:21:58.710410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.608 [2024-11-27 07:21:58.710421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-27 07:21:58.710439 - 07:21:58.711518: READ sqid:1 cid:1-62 nsid:1 lba:24704-32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:47.610 [2024-11-27 07:21:58.711528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.610 [2024-11-27 07:21:58.711535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.610 [2024-11-27 07:21:58.711544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c1f20 is same with the state(6) to be set
00:26:47.610 [2024-11-27 07:21:58.711678] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:47.610 [2024-11-27 07:21:58.711701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:47.610 [2024-11-27 07:21:58.711726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bb850 (9): Bad file descriptor
00:26:47.610 [2024-11-27 07:21:58.711737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbcc0 (9): Bad file descriptor
00:26:47.610 [2024-11-27 07:21:58.711746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e7980 (9): Bad file descriptor
00:26:47.610 [2024-11-27 07:21:58.713333] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:47.610 [2024-11-27 07:21:58.713363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:47.610 [2024-11-27 07:21:58.713731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.610 [2024-11-27 07:21:58.713747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27dc6a0 with addr=10.0.0.2, port=4420
00:26:47.610 [2024-11-27 07:21:58.713755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dc6a0 is same with the state(6) to be set
00:26:47.610 [2024-11-27 07:21:58.713764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:47.610 [2024-11-27 07:21:58.713772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:47.610 [2024-11-27 07:21:58.713781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:47.610 [2024-11-27 07:21:58.713789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:47.610 [2024-11-27 07:21:58.713798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:47.610 [2024-11-27 07:21:58.713808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:47.610 [2024-11-27 07:21:58.713815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:47.610 [2024-11-27 07:21:58.713822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:47.610 [2024-11-27 07:21:58.713830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:47.610 [2024-11-27 07:21:58.713837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:47.610 [2024-11-27 07:21:58.713844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:47.610 [2024-11-27 07:21:58.713851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:47.610 [2024-11-27 07:21:58.713879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2833bb0 (9): Bad file descriptor
00:26:47.610 [2024-11-27 07:21:58.714433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.610 [2024-11-27 07:21:58.714472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3610 with addr=10.0.0.2, port=4420
00:26:47.610 [2024-11-27 07:21:58.714484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3610 is same with the state(6) to be set
00:26:47.610 [2024-11-27 07:21:58.714499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dc6a0 (9): Bad file descriptor
00:26:47.611 [2024-11-27 07:21:58.714545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.611 [2024-11-27 07:21:58.714555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-27 07:21:58.714571 - 07:21:58.715644: READ sqid:1 cid:1-62 nsid:1 lba:24704-32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:47.613 [2024-11-27 07:21:58.715654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.613 [2024-11-27 07:21:58.715661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.613 [2024-11-27 07:21:58.715670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b16d0 is same with the state(6) to be set
00:26:47.613 [2024-11-27 07:21:58.716964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.613 [2024-11-27 07:21:58.716979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-27 07:21:58.716992 - 07:21:58.717993: READ sqid:1 cid:1-58 nsid:1 lba:16512-23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:47.615 [2024-11-27 07:21:58.718003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.615 [2024-11-27
07:21:58.718011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.718020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.718028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.718037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.718044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.718054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.718061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.718071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.718079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.718088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27bf9a0 is same with the state(6) to be set 00:26:47.615 [2024-11-27 07:21:58.719362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.615 [2024-11-27 07:21:58.719634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.615 [2024-11-27 07:21:58.719641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.719988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.719998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.616 [2024-11-27 07:21:58.720166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.616 [2024-11-27 07:21:58.720176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.720496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.720505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c0c60 is same with the state(6) to be set 00:26:47.617 [2024-11-27 07:21:58.722079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.617 [2024-11-27 07:21:58.722346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.617 [2024-11-27 07:21:58.722356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.618 [2024-11-27 07:21:58.722929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.618 [2024-11-27 07:21:58.722938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.722946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.722955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.722964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.619 [2024-11-27 07:21:58.722973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.722981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.722996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 
07:21:58.723150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.619 [2024-11-27 07:21:58.723197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.619 [2024-11-27 07:21:58.723205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2666fd0 is same with the state(6) to be set 00:26:47.619 [2024-11-27 07:21:58.724461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:47.619 [2024-11-27 07:21:58.724481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:47.619 [2024-11-27 07:21:58.724493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:47.619 [2024-11-27 07:21:58.724506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:26:47.619 [2024-11-27 07:21:58.724550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3610 (9): Bad file descriptor 00:26:47.619 [2024-11-27 07:21:58.724561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:47.619 [2024-11-27 07:21:58.724568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:47.619 [2024-11-27 07:21:58.724577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:47.619 [2024-11-27 07:21:58.724586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:26:47.619 [2024-11-27 07:21:58.724651] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:26:47.619 [2024-11-27 07:21:58.724957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.619 [2024-11-27 07:21:58.724974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9fc0 with addr=10.0.0.2, port=4420
00:26:47.619 [2024-11-27 07:21:58.724982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9fc0 is same with the state(6) to be set
00:26:47.619 [2024-11-27 07:21:58.725189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.619 [2024-11-27 07:21:58.725201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27e7180 with addr=10.0.0.2, port=4420
00:26:47.619 [2024-11-27 07:21:58.725209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e7180 is same with the state(6) to be set
00:26:47.619 [2024-11-27 07:21:58.725510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.619 [2024-11-27 07:21:58.725520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27dcc90 with addr=10.0.0.2, port=4420
00:26:47.619 [2024-11-27 07:21:58.725527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dcc90 is same with the state(6) to be set
00:26:47.619 [2024-11-27 07:21:58.725859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.619 [2024-11-27 07:21:58.725870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2825d50 with addr=10.0.0.2, port=4420
00:26:47.619 [2024-11-27 07:21:58.725877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2825d50 is same with the state(6) to be set
00:26:47.619 [2024-11-27 07:21:58.725885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:26:47.619 [2024-11-27 07:21:58.725892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:26:47.619 [2024-11-27 07:21:58.725900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:47.619 [2024-11-27 07:21:58.725907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:26:47.619 [2024-11-27 07:21:58.726751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.619 [2024-11-27 07:21:58.726764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further command/completion pairs elided: READ sqid:1 cid:5-63 (lba:17024-24448) and WRITE sqid:1 cid:0-3 (lba:24576-24960), every one aborted with SQ DELETION (00/08) ...]
00:26:47.621 [2024-11-27 07:21:58.727870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2665d90 is same with the state(6) to be set
00:26:47.621 [2024-11-27 07:21:58.729651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:47.621 [2024-11-27 07:21:58.729677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:47.621 [2024-11-27 07:21:58.729686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:47.621 [2024-11-27 07:21:58.729695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:47.621 task offset: 28928 on job bdev=Nvme2n1 fails
00:26:47.621
00:26:47.621 Latency(us)
00:26:47.621 [2024-11-27T06:21:58.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.621 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme1n1 ended in about 0.97 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme1n1 : 0.97 198.08 12.38 66.03 0.00 239520.21 17148.59 255153.49
00:26:47.621 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme2n1 ended in about 0.97 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme2n1 : 0.97 198.72 12.42 66.24 0.00 233869.87 19114.67 244667.73
00:26:47.621 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme3n1 ended in about 0.98 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme3n1 : 0.98 195.49 12.22 65.16 0.00 233050.03 18459.31 265639.25
00:26:47.621 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme4n1 ended in about 0.97 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme4n1 : 0.97 201.94 12.62 65.94 0.00 221802.81 19223.89 242920.11
00:26:47.621 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme5n1 ended in about 0.98 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme5n1 : 0.98 130.01 8.13 65.00 0.00 298666.10 16711.68 255153.49
00:26:47.621 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme6n1 ended in about 0.99 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme6n1 : 0.99 129.69 8.11 64.85 0.00 292997.12 17257.81 283115.52
00:26:47.621 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme7n1 ended in about 0.98 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme7n1 : 0.98 196.26 12.27 65.42 0.00 212687.79 19551.57 230686.72
00:26:47.621 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme8n1 ended in about 0.97 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme8n1 : 0.97 201.29 12.58 65.73 0.00 203496.74 5051.73 223696.21
00:26:47.621 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme9n1 ended in about 0.99 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme9n1 : 0.99 132.75 8.30 64.37 0.00 270465.57 20206.93 274377.39
00:26:47.621 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.621 Job: Nvme10n1 ended in about 0.99 seconds with error
00:26:47.621 Verification LBA range: start 0x0 length 0x400
00:26:47.621 Nvme10n1 : 0.99 129.34 8.08 64.67 0.00 268049.07 23156.05 279620.27
00:26:47.621 [2024-11-27T06:21:58.826Z] ===================================================================================================================
00:26:47.621 [2024-11-27T06:21:58.826Z] Total : 1713.57 107.10 653.40 0.00 243502.00 5051.73 283115.52
00:26:47.621 [2024-11-27 07:21:58.755243] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:47.621 [2024-11-27 07:21:58.755299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:47.621 [2024-11-27 07:21:58.755373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9fc0 (9): Bad file descriptor
00:26:47.621 [2024-11-27 07:21:58.755391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e7180 (9): Bad file descriptor
00:26:47.621 [2024-11-27 07:21:58.755401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dcc90 (9): Bad file descriptor
00:26:47.621 [2024-11-27 07:21:58.755412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2825d50 (9): Bad file descriptor
00:26:47.621 [2024-11-27 07:21:58.755828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.621 [2024-11-27 07:21:58.755850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27e7980 with addr=10.0.0.2, port=4420
00:26:47.621 [2024-11-27 07:21:58.755861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e7980 is same with the state(6) to be set
00:26:47.621 [2024-11-27 07:21:58.756183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.621 [2024-11-27 07:21:58.756195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bbcc0 with addr=10.0.0.2, port=4420
00:26:47.621 [2024-11-27 07:21:58.756204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbcc0 is same with the state(6) to be set
00:26:47.621 [2024-11-27 07:21:58.756296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.621 [2024-11-27 07:21:58.756307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bb850 with addr=10.0.0.2, port=4420
00:26:47.621 [2024-11-27 07:21:58.756315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bb850 is same with the state(6) to be set
00:26:47.621 [2024-11-27 07:21:58.756687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.621 [2024-11-27 07:21:58.756697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27dc6a0 with addr=10.0.0.2, port=4420
00:26:47.621 [2024-11-27 07:21:58.756704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dc6a0 is same with the state(6) to be set
00:26:47.621 [2024-11-27 07:21:58.756903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.621 [2024-11-27 07:21:58.756914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2833bb0 with addr=10.0.0.2, port=4420
00:26:47.621 [2024-11-27 07:21:58.756921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2833bb0 is same with the state(6) to be set
00:26:47.621 [2024-11-27 07:21:58.756929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:47.621 [2024-11-27 07:21:58.756936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:47.621 [2024-11-27 07:21:58.756946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:47.621 [2024-11-27 07:21:58.756956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:26:47.621 [2024-11-27 07:21:58.756965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.756973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.756980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.756986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.756994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.757000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.757011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.757019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
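In the Latency table above, the MiB/s column is determined by the IOPS column and the 64 KiB I/O size (IO size: 65536): MiB/s = IOPS * 65536 / 1048576, i.e. IOPS / 16. A one-line sanity check of the Nvme1n1 row (a sketch, not part of the test harness):

    $ awk 'BEGIN { printf "%.2f\n", 198.08 * 65536 / 1048576 }'
    12.38

which matches the reported 12.38 MiB/s; the Fail/s column is the corresponding rate of I/Os that completed with an error, here the aborted reads and writes from the SQ deletions logged above.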
00:26:47.622 [2024-11-27 07:21:58.757028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.757035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.757042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.757049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.757459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e7980 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.757476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbcc0 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.757486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bb850 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.757497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dc6a0 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.757506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2833bb0 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.757547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:47.622 [2024-11-27 07:21:58.757558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:47.622 [2024-11-27 07:21:58.757567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:47.622 [2024-11-27 07:21:58.757576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:47.622 [2024-11-27 07:21:58.757585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:47.622 [2024-11-27 07:21:58.757623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.757630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.757637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.757644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.757652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.757659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.757667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.757674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.757681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.757687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.757694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.757700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.757709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.757719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.757727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.757735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.757742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.757748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.757755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.757762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.757993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.622 [2024-11-27 07:21:58.758007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3610 with addr=10.0.0.2, port=4420
00:26:47.622 [2024-11-27 07:21:58.758016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3610 is same with the state(6) to be set
00:26:47.622 [2024-11-27 07:21:58.758302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.622 [2024-11-27 07:21:58.758313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2825d50 with addr=10.0.0.2, port=4420
00:26:47.622 [2024-11-27 07:21:58.758321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2825d50 is same with the state(6) to be set
00:26:47.622 [2024-11-27 07:21:58.758609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.622 [2024-11-27 07:21:58.758620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27dcc90 with addr=10.0.0.2, port=4420
00:26:47.622 [2024-11-27 07:21:58.758627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dcc90 is same with the state(6) to be set
00:26:47.622 [2024-11-27 07:21:58.758946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.622 [2024-11-27 07:21:58.758956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27e7180 with addr=10.0.0.2, port=4420
00:26:47.622 [2024-11-27 07:21:58.758963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e7180 is same with the state(6) to be set
00:26:47.622 [2024-11-27 07:21:58.759271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.622 [2024-11-27 07:21:58.759281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9fc0 with addr=10.0.0.2, port=4420
00:26:47.622 [2024-11-27 07:21:58.759289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9fc0 is same with the state(6) to be set
00:26:47.622 [2024-11-27 07:21:58.759317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3610 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.759327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2825d50 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.759337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dcc90 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.759346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e7180 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.759355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9fc0 (9): Bad file descriptor
00:26:47.622 [2024-11-27 07:21:58.759382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.759389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.759400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.759406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.759413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.759420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.759428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.759434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.759441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.759448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.759455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.759461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.759468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.759475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.759482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.759489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:47.622 [2024-11-27 07:21:58.759496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:47.622 [2024-11-27 07:21:58.759502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:47.622 [2024-11-27 07:21:58.759509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:47.622 [2024-11-27 07:21:58.759516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
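The teardown trace that follows shows autotest_common.sh's NOT helper normalizing the exit status of a wait that is expected to fail: wait returns 255 for the dead bdevperf job, anything above 128 is folded to 127, any remaining failure collapses to 1, and the final arithmetic test succeeds precisely because the status is non-zero. Schematically (a sketch reconstructed from the visible xtrace below, not the helper's actual source):

    es=0
    wait 2463570 || es=$?           # the bdevperf job already died, so wait reports es=255
    (( es > 128 )) && es=127        # fold signal-like statuses (>128) down to 127
    case "$es" in *) es=1 ;; esac   # any remaining failure becomes a generic es=1
    (( !es == 0 ))                  # with es=1, !es is 0, so this is true: NOT passes

So "NOT wait 2463570" succeeding is the test asserting that the job really did fail.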
00:26:47.883 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2463570
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2463570
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2463570
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:48.830 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:48.830 rmmod nvme_tcp
rmmod nvme_fabrics
00:26:48.830 rmmod nvme_keyring
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2463333 ']'
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2463333
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2463333 ']'
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2463333
00:26:48.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2463333) - No such process
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2463333 is not found'
00:26:48.830 Process with pid 2463333 is not found
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:48.830 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:49.092 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:51.148
00:26:51.148 real 0m7.874s
00:26:51.148 user 0m19.282s
00:26:51.148 sys 0m1.315s
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:51.148 ************************************
00:26:51.148 END TEST nvmf_shutdown_tc3
00:26:51.148 ************************************
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:51.148 ************************************
00:26:51.148 START TEST nvmf_shutdown_tc4
00:26:51.148 ************************************
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:51.148 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
[... nine similar mlx+=(${pci_bus_cache[...]}) lookups elided, nvmf/common.sh@330-@344, for Mellanox device IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013 ...]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:26:51.149 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:26:51.149 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:26:51.149 Found net devices under 0000:4b:00.0: cvl_0_0
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:26:51.149 Found net devices under 0000:4b:00.1: cvl_0_1
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:51.149 07:22:02
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.149 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:26:51.466 00:26:51.466 --- 10.0.0.2 ping statistics --- 00:26:51.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.466 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:26:51.466 00:26:51.466 --- 10.0.0.1 ping statistics --- 00:26:51.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.466 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2464942 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2464942 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2464942 ']' 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
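For readers following the trace: the nvmftestinit plumbing above boils down to the sequence below. This is a condensed recap of commands already visible in the log (the cvl_0_0/cvl_0_1 device names, the 10.0.0.0/24 addresses, port 4420 and the nvmf_tgt flags are all taken from the trace; the trace repeats the ip netns exec prefix several times when launching the target, an artifact of how NVMF_APP is assembled, but a single prefix is shown here):

# The target-side port is moved into its own network namespace, so initiator
# and target talk over the physical E810 link rather than loopback.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator (host) side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                       # host -> target netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target netns -> host
# Start the target inside the namespace; waitforlisten then polls the
# /var/tmp/spdk.sock RPC socket until the app is ready.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &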
00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.466 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.466 [2024-11-27 07:22:02.656750] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:26:51.466 [2024-11-27 07:22:02.656815] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.727 [2024-11-27 07:22:02.752679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.727 [2024-11-27 07:22:02.786408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.727 [2024-11-27 07:22:02.786450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.727 [2024-11-27 07:22:02.786456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.727 [2024-11-27 07:22:02.786461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.727 [2024-11-27 07:22:02.786465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.727 [2024-11-27 07:22:02.787775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.727 [2024-11-27 07:22:02.787928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.727 [2024-11-27 07:22:02.788070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.727 [2024-11-27 07:22:02.788071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.298 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:52.560 [2024-11-27 07:22:03.503506] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.560 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.560 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:52.560 07:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:52.560 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:52.560 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:52.560 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:52.560 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:52.560 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[the shutdown.sh@28 for / @29 cat pair repeats identically for each of the ten subsystems]
00:26:52.561 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:52.561 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.561 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
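The shutdown.sh@28/@29 loop above appends one block of RPCs per subsystem to rpcs.txt, and the single rpc_cmd call then plays the whole file back against the target in one batch. A minimal sketch of what each iteration emits, inferred from the Malloc1..Malloc10 bdevs and the nqn.2016-06.io.spdk:cnodeN names that appear in this log (the malloc size/block-size arguments and the SPDK$i serial numbers are assumptions, not read from the trace):

# Build a batch of RPCs: one malloc bdev, one subsystem, one namespace and
# one TCP listener for each of subsystems 1..10.
for i in {1..10}; do
  cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done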
00:26:52.561 Malloc1
00:26:52.561 [2024-11-27 07:22:03.615599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:52.561 Malloc2
00:26:52.561 Malloc3
00:26:52.561 Malloc4
00:26:52.561 Malloc5
00:26:52.821 Malloc6
00:26:52.821 Malloc7
00:26:52.821 Malloc8
00:26:52.821 Malloc9
00:26:52.821 Malloc10
00:26:52.821 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.821 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:26:52.822 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:52.822 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:52.822 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2465324
00:26:52.822 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:26:52.822 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:26:53.082 [2024-11-27 07:22:04.093749] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2464942
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2464942 ']'
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2464942
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464942
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464942'
00:26:58.400 killing process with pid 2464942
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2464942
00:26:58.400 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2464942
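Here is the point of tc4: the target is taken down while spdk_nvme_perf still has a full queue of writes outstanding on every qpair. Paraphrased from the trace above (the perf flags, the PIDs and the nvmfpid/perfpid variables are from the log; the backgrounding with & is implied by perfpid being captured):

build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!        # 2465324 in this run; -P 4 matches the qpair ids 1..4 below
sleep 5           # let writes queue up (-q 128 per qpair)
kill "$nvmfpid"   # killprocess 2464942: SIGTERM the target mid-workload
wait "$nvmfpid"   # reap it; perf is left holding dead TCP connections

Everything from here down is the expected fallout on the initiator side: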
00:26:58.400 [2024-11-27 07:22:09.087799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc1fd0 is same with the state(6) to be set
[the same recv-state error repeats seven more times for tqpair=0xbc1fd0, 07:22:09.087852 through 07:22:09.087883]
00:26:58.401 [2024-11-27 07:22:09.088181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc24a0 is same with the state(6) to be set [repeats once more at 07:22:09.088206]
00:26:58.401 [2024-11-27 07:22:09.088491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2990 is same with the state(6) to be set [repeats three more times through 07:22:09.088525]
00:26:58.401 [2024-11-27 07:22:09.088874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf2b70 is same with the state(6) to be set [repeats four more times through 07:22:09.088912]
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 starting I/O failed: -6
00:26:58.401 [2024-11-27 07:22:09.092992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4430 is same with the state(6) to be set [repeats seven more times through 07:22:09.093044, interleaved with more Write-error lines]
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.401 [2024-11-27 07:22:09.093239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4900 is same with the state(6) to be set [repeats five more times through 07:22:09.093277]
00:26:58.401 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.401 [2024-11-27 07:22:09.093530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4dd0 is same with the state(6) to be set [repeats four more times through 07:22:09.093562]
00:26:58.401 [2024-11-27 07:22:09.093540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.401 [2024-11-27 07:22:09.093749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc3f60 is same with the state(6) to be set [repeats seven more times through 07:22:09.093802]
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.401 [2024-11-27 07:22:09.094402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:58.401 Write completed with error (sct=0, sc=8)
00:26:58.401 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.402 [2024-11-27 07:22:09.095330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:58.402 Write completed with error (sct=0, sc=8)
00:26:58.402 starting I/O failed: -6
[the Write-error / I/O-failed pair repeats for the rest of the queue]
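A decoding note for the flood above and below: sct=0/sc=8 is the NVMe generic status 'Command Aborted due to SQ Deletion', which is how outstanding writes are completed once their queue pair is torn down, and the CQ transport error -6 is -ENXIO ('No such device or address') from the dropped TCP connections. cnode10's four qpairs fail first; the same cascade then repeats for the other subsystems (cnode7, cnode2, ...). To tally the damage from a saved copy of this output, a throwaway check along these lines would do (perf.log and the grep patterns are illustrative, not part of the harness):

# Count aborted writes and dead qpairs in a captured log.
aborted=$(grep -c 'Write completed with error (sct=0, sc=8)' perf.log)
dead_qpairs=$(grep -c 'CQ transport error -6' perf.log)
echo "aborted writes: $aborted, failed qpairs: $dead_qpairs"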
00:26:58.402 Write completed with error (sct=0, sc=8)
00:26:58.402 starting I/O failed: -6
[the Write-error / I/O-failed pair repeats for the remaining queued I/O on this qpair]
00:26:58.402 [2024-11-27 07:22:09.096740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:58.402 NVMe io qpair process completion error
00:26:58.402 Write completed with error (sct=0, sc=8)
00:26:58.402 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.403 [2024-11-27 07:22:09.097878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:58.403 starting I/O failed: -6
00:26:58.403 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.403 [2024-11-27 07:22:09.098719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:58.403 Write completed with error (sct=0, sc=8)
00:26:58.403 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.403 [2024-11-27 07:22:09.099631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:58.403 Write completed with error (sct=0, sc=8)
00:26:58.403 starting I/O failed: -6
[more Write-error / I/O-failed lines follow]
00:26:58.404 [2024-11-27 07:22:09.101245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:58.404 NVMe io qpair process completion error
00:26:58.404 Write completed with error (sct=0, sc=8)
00:26:58.404 starting I/O failed: -6
00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 [2024-11-27 07:22:09.102632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error 
(sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 [2024-11-27 07:22:09.103495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 
00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.404 starting I/O failed: -6 00:26:58.404 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 [2024-11-27 07:22:09.104430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 
00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 
00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 [2024-11-27 07:22:09.106553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:58.405 NVMe io qpair process completion error 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.405 starting I/O failed: -6 00:26:58.405 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write 
completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 [2024-11-27 07:22:09.107854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 [2024-11-27 07:22:09.108695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with 
error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error 
(sct=0, sc=8) 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 [2024-11-27 07:22:09.109634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.406 Write completed with error (sct=0, sc=8) 00:26:58.406 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 
00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 [2024-11-27 07:22:09.111616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:58.407 NVMe io qpair process completion error 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 
00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 [2024-11-27 07:22:09.112733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting 
I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 [2024-11-27 07:22:09.113552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.407 starting I/O failed: -6 00:26:58.407 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 
Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 [2024-11-27 07:22:09.114503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with 
error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error 
(sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 [2024-11-27 07:22:09.116180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:58.408 NVMe io qpair process completion error 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 starting I/O failed: -6 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.408 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 starting I/O failed: -6 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 starting I/O failed: -6 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 starting I/O failed: -6 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 starting I/O failed: -6 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 starting I/O failed: -6 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 Write completed with error (sct=0, sc=8) 00:26:58.409 [2024-11-27 07:22:09.117278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.409 Write 
completed with error (sct=0, sc=8)
00:26:58.409 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines condensed ...]
00:26:58.409 [2024-11-27 07:22:09.118089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated I/O failure lines condensed ...]
00:26:58.409 [2024-11-27 07:22:09.119028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated I/O failure lines condensed ...]
00:26:58.410 [2024-11-27 07:22:09.122261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:58.410 NVMe io qpair process completion error
[... repeated I/O failure lines condensed ...]
00:26:58.410 [2024-11-27 07:22:09.123420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated I/O failure lines condensed ...]
00:26:58.411 [2024-11-27 07:22:09.124299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated I/O failure lines condensed ...]
00:26:58.411 [2024-11-27 07:22:09.125214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated I/O failure lines condensed ...]
00:26:58.412 [2024-11-27 07:22:09.127058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:58.412 NVMe io qpair process completion error
[... repeated I/O failure lines condensed ...]
00:26:58.412 [2024-11-27 07:22:09.129297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated I/O failure lines condensed ...]
00:26:58.412 [2024-11-27 07:22:09.130300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated I/O failure lines condensed ...]
00:26:58.413 [2024-11-27 07:22:09.132756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:58.413 NVMe io qpair process completion error
[... repeated I/O failure lines condensed ...]
00:26:58.413 [2024-11-27 07:22:09.133883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated I/O failure lines condensed ...]
00:26:58.413 [2024-11-27 07:22:09.134719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated I/O failure lines condensed ...]
00:26:58.414 [2024-11-27 07:22:09.135646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated I/O failure lines condensed ...]
00:26:58.414 [2024-11-27 07:22:09.137082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:58.414 NVMe io qpair process completion error
[... repeated I/O failure lines condensed ...]
00:26:58.415 [2024-11-27 07:22:09.138596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated I/O failure lines condensed ...]
00:26:58.415 [2024-11-27 07:22:09.139493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated I/O failure lines condensed ...]
00:26:58.416 Write completed with error (sct=0, sc=8) 00:26:58.416 starting I/O failed: -6 00:26:58.416 Write completed with error (sct=0, sc=8) 00:26:58.416 starting I/O failed: -6 00:26:58.416 [2024-11-27 07:22:09.142329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.416 NVMe io qpair process completion error 00:26:58.416 Initializing NVMe Controllers 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:58.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:26:58.416 Controller IO queue size 128, less than required. 00:26:58.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
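Note: the "Controller IO queue size 128, less than required" notices above are spdk_nvme_perf warning that the requested queue depth exceeds what these fabrics controllers expose, so surplus IOs sit queued in the NVMe driver. A minimal way to follow that advice when rerunning the workload by hand, sketched with this run's target address and the stock perf options (-q depth, -o IO size, -w pattern, -t seconds, -r transport ID; check spdk_nvme_perf --help on your build):

    # keep the per-qpair depth below the reported IO queue size (128),
    # or shrink the IO size, to avoid driver-side queueing
    ./build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -q 64 -o 4096 -w randwrite -t 10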
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:58.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:58.416 Initialization complete. Launching workers.
00:26:58.416 ========================================================
00:26:58.416 Latency(us)
00:26:58.416 Device Information : IOPS MiB/s Average min max
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1889.31 81.18 67772.71 487.16 124809.29
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1908.30 82.00 67129.14 675.74 132528.10
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1887.62 81.11 67884.03 636.50 119346.87
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1878.55 80.72 67522.91 638.67 119616.32
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1853.24 79.63 68463.02 662.50 125065.84
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1870.32 80.37 67862.64 863.96 124387.11
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1844.17 79.24 68856.81 661.23 119506.12
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1908.51 82.01 66559.70 853.92 126662.52
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1871.59 80.42 67894.32 817.16 125330.55
00:26:58.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1852.39 79.60 68642.16 912.56 127792.46
00:26:58.416 ========================================================
00:26:58.416 Total : 18764.00 806.27 67851.72 487.16 132528.10
00:26:58.416
00:26:58.416 [2024-11-27 07:22:09.147419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d720 is same with the state(6) to be set
00:26:58.416 [2024-11-27 07:22:09.147465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ca70 is same with the state(6) to be set
00:26:58.416 [2024-11-27 07:22:09.147498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0b890 is same with the state(6) to be set
00:26:58.417 [2024-11-27 07:22:09.147527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0d900 is same with the state(6) to be set
00:26:58.417 [2024-11-27 07:22:09.147556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0c410 is same with the state(6) to be set
00:26:58.417 [2024-11-27 07:22:09.147586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1e0dae0 is same with the state(6) to be set 00:26:58.417 [2024-11-27 07:22:09.147615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0bef0 is same with the state(6) to be set 00:26:58.417 [2024-11-27 07:22:09.147644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0b560 is same with the state(6) to be set 00:26:58.417 [2024-11-27 07:22:09.147673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0bbc0 is same with the state(6) to be set 00:26:58.417 [2024-11-27 07:22:09.147701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0c740 is same with the state(6) to be set 00:26:58.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:58.417 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2465324 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2465324 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2465324 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:59.361 07:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:59.361 rmmod nvme_tcp 00:26:59.361 rmmod nvme_fabrics 00:26:59.361 rmmod nvme_keyring 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2464942 ']' 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2464942 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2464942 ']' 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2464942 00:26:59.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2464942) - No such process 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2464942 is not found' 00:26:59.361 Process with pid 2464942 is not found 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.361 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.362 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.908 07:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:01.908 00:27:01.908 real 0m10.296s 00:27:01.908 user 0m28.013s 00:27:01.908 sys 0m4.004s 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:01.908 ************************************ 00:27:01.908 END TEST nvmf_shutdown_tc4 00:27:01.908 ************************************ 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:27:01.908 00:27:01.908 real 0m43.897s 00:27:01.908 user 1m46.772s 00:27:01.908 sys 0m14.095s 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.908 ************************************ 00:27:01.908 END TEST nvmf_shutdown 00:27:01.908 ************************************ 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:01.908 ************************************ 00:27:01.908 START TEST nvmf_nsid 00:27:01.908 ************************************ 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:27:01.908 * Looking for test storage... 
00:27:01.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.908 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:01.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.909 --rc genhtml_branch_coverage=1 00:27:01.909 --rc genhtml_function_coverage=1 00:27:01.909 --rc genhtml_legend=1 00:27:01.909 --rc geninfo_all_blocks=1 00:27:01.909 --rc geninfo_unexecuted_blocks=1 00:27:01.909 00:27:01.909 ' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.909 --rc genhtml_branch_coverage=1 00:27:01.909 --rc genhtml_function_coverage=1 00:27:01.909 --rc genhtml_legend=1 00:27:01.909 --rc geninfo_all_blocks=1 00:27:01.909 --rc geninfo_unexecuted_blocks=1 00:27:01.909 00:27:01.909 ' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:01.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.909 --rc genhtml_branch_coverage=1 00:27:01.909 --rc genhtml_function_coverage=1 00:27:01.909 --rc genhtml_legend=1 00:27:01.909 --rc geninfo_all_blocks=1 00:27:01.909 --rc geninfo_unexecuted_blocks=1 00:27:01.909 00:27:01.909 ' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.909 --rc genhtml_branch_coverage=1 00:27:01.909 --rc genhtml_function_coverage=1 00:27:01.909 --rc genhtml_legend=1 00:27:01.909 --rc geninfo_all_blocks=1 00:27:01.909 --rc geninfo_unexecuted_blocks=1 00:27:01.909 00:27:01.909 ' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:01.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:01.909 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:01.910 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.910 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.910 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.910 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:01.910 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:01.910 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:27:01.910 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:10.055 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.055 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:10.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:10.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:10.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.056 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:27:10.056 00:27:10.056 --- 10.0.0.2 ping statistics --- 00:27:10.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.056 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:27:10.056 00:27:10.056 --- 10.0.0.1 ping statistics --- 00:27:10.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.056 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2470677 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2470677 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2470677 ']' 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.056 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.056 [2024-11-27 07:22:20.456363] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
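Note: the nvmf_tcp_init sequence traced above moves one e810 port into a network namespace so a single host can act as both target (10.0.0.2 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 in the default namespace). Condensed into plain commands with this run's interface and namespace names (a sketch of what nvmf/common.sh did, not a substitute for it):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator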
00:27:10.056 [2024-11-27 07:22:20.456430] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.056 [2024-11-27 07:22:20.555574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.056 [2024-11-27 07:22:20.606479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.056 [2024-11-27 07:22:20.606527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.056 [2024-11-27 07:22:20.606536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.056 [2024-11-27 07:22:20.606543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.056 [2024-11-27 07:22:20.606549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.056 [2024-11-27 07:22:20.607320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2470932 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=a11b967a-04ae-4d8c-bfca-d6067f920a3c 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=05beaa5c-d897-4b44-9283-6b4f239fac03 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9a941419-8937-455d-8106-22c8e1485724 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.318 null0 00:27:10.318 null1 00:27:10.318 [2024-11-27 07:22:21.365018] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:27:10.318 [2024-11-27 07:22:21.365091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470932 ] 00:27:10.318 null2 00:27:10.318 [2024-11-27 07:22:21.370608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.318 [2024-11-27 07:22:21.394911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2470932 /var/tmp/tgt2.sock 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2470932 ']' 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:27:10.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
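Note: the nsid assertions that follow reduce to "the NGUID a namespace reports equals the UUID it was created with, minus dashes, compared case-insensitively". A sketch of that check, reusing the ns1 UUID generated above (uuid2nguid in nvmf/common.sh is essentially tr -d -; nvme-cli and jq invoked as in the trace below):

    uuid=a11b967a-04ae-4d8c-bfca-d6067f920a3c                  # ns1uuid from this run
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)    # NGUID as a hex string
    expected=$(tr -d '-' <<< "$uuid")
    [[ "${nguid^^}" == "${expected^^}" ]] && echo "nvme0n1 NGUID matches UUID"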
00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.318 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.318 [2024-11-27 07:22:21.459358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.318 [2024-11-27 07:22:21.515393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.580 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.580 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:27:10.580 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:27:11.150 [2024-11-27 07:22:22.079456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.150 [2024-11-27 07:22:22.095641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:27:11.150 nvme0n1 nvme0n2 00:27:11.150 nvme1n1 00:27:11.150 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:27:11.150 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:27:11.150 07:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:27:12.537 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:27:13.481 07:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid a11b967a-04ae-4d8c-bfca-d6067f920a3c 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a11b967a04ae4d8cbfcad6067f920a3c 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A11B967A04AE4D8CBFCAD6067F920A3C 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ A11B967A04AE4D8CBFCAD6067F920A3C == \A\1\1\B\9\6\7\A\0\4\A\E\4\D\8\C\B\F\C\A\D\6\0\6\7\F\9\2\0\A\3\C ]] 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:13.481 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:13.742 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:27:13.742 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:13.742 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:27:13.742 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:13.742 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 05beaa5c-d897-4b44-9283-6b4f239fac03 00:27:13.742 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.742 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=05beaa5cd8974b4492836b4f239fac03 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 05BEAA5CD8974B4492836B4F239FAC03 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 05BEAA5CD8974B4492836B4F239FAC03 == \0\5\B\E\A\A\5\C\D\8\9\7\4\B\4\4\9\2\8\3\6\B\4\F\2\3\9\F\A\C\0\3 ]] 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:27:13.743 07:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9a941419-8937-455d-8106-22c8e1485724 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9a9414198937455d810622c8e1485724 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9A9414198937455D810622C8E1485724 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9A9414198937455D810622C8E1485724 == \9\A\9\4\1\4\1\9\8\9\3\7\4\5\5\D\8\1\0\6\2\2\C\8\E\1\4\8\5\7\2\4 ]] 00:27:13.743 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2470932 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2470932 ']' 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2470932 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470932 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470932' 00:27:14.003 killing process with pid 2470932 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2470932 00:27:14.003 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2470932 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:14.264 rmmod nvme_tcp 00:27:14.264 rmmod nvme_fabrics 00:27:14.264 rmmod nvme_keyring 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2470677 ']' 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2470677 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2470677 ']' 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2470677 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470677 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470677' 00:27:14.264 killing process with pid 2470677 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2470677 00:27:14.264 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2470677 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.526 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.438 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:16.438 00:27:16.438 real 0m15.003s 00:27:16.438 user 
0m11.541s 00:27:16.438 sys 0m6.842s 00:27:16.438 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.438 07:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:16.438 ************************************ 00:27:16.438 END TEST nvmf_nsid 00:27:16.438 ************************************ 00:27:16.699 07:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:16.699 00:27:16.699 real 13m7.925s 00:27:16.699 user 27m31.014s 00:27:16.699 sys 3m55.710s 00:27:16.699 07:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.699 07:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:16.699 ************************************ 00:27:16.699 END TEST nvmf_target_extra 00:27:16.699 ************************************ 00:27:16.699 07:22:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:16.699 07:22:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:16.699 07:22:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.699 07:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.699 ************************************ 00:27:16.699 START TEST nvmf_host 00:27:16.699 ************************************ 00:27:16.700 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:16.700 * Looking for test storage... 00:27:16.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:16.700 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.700 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.700 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.961 --rc genhtml_branch_coverage=1 00:27:16.961 --rc genhtml_function_coverage=1 00:27:16.961 --rc genhtml_legend=1 00:27:16.961 --rc geninfo_all_blocks=1 00:27:16.961 --rc geninfo_unexecuted_blocks=1 00:27:16.961 00:27:16.961 ' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.961 --rc genhtml_branch_coverage=1 00:27:16.961 --rc genhtml_function_coverage=1 00:27:16.961 --rc genhtml_legend=1 00:27:16.961 --rc geninfo_all_blocks=1 00:27:16.961 --rc geninfo_unexecuted_blocks=1 00:27:16.961 00:27:16.961 ' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.961 --rc genhtml_branch_coverage=1 00:27:16.961 --rc genhtml_function_coverage=1 00:27:16.961 --rc genhtml_legend=1 00:27:16.961 --rc geninfo_all_blocks=1 00:27:16.961 --rc geninfo_unexecuted_blocks=1 00:27:16.961 00:27:16.961 ' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:16.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.961 --rc genhtml_branch_coverage=1 00:27:16.961 --rc genhtml_function_coverage=1 00:27:16.961 --rc genhtml_legend=1 00:27:16.961 --rc geninfo_all_blocks=1 00:27:16.961 --rc geninfo_unexecuted_blocks=1 00:27:16.961 00:27:16.961 ' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
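The cmp_versions walk traced above (ver1=1.15, ver2=2) is how the harness decides the installed lcov predates 2.0 and picks the matching LCOV_OPTS. The same field-by-field comparison of dot-separated numeric versions can be sketched on its own (function name mine, not the script's):

    ver_lt() {  # return 0 if $1 sorts before $2, comparing numeric dot-fields
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    ver_lt 1.15 2 && echo "old lcov: use the --rc lcov_branch_coverage=1 style options"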
00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.961 07:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.961 ************************************ 00:27:16.961 START TEST nvmf_multicontroller 00:27:16.961 ************************************ 00:27:16.961 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:16.961 * Looking for test storage... 
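Both test files source nvmf/common.sh, which derives the host identity pair seen in these traces: `nvme gen-hostnqn` emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID part is reused as NVME_HOSTID. Reproducing that pair by hand, outside the harness, looks roughly like this (target address and subsystem NQN taken from this run):

    hostid=$(uuidgen)                                   # or reuse the uuid from `nvme gen-hostnqn`
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$hostnqn" --hostid="$hostid"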
00:27:16.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.961 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.961 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.961 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.223 --rc genhtml_branch_coverage=1 00:27:17.223 --rc genhtml_function_coverage=1 00:27:17.223 --rc genhtml_legend=1 00:27:17.223 --rc geninfo_all_blocks=1 00:27:17.223 --rc geninfo_unexecuted_blocks=1 00:27:17.223 00:27:17.223 ' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.223 --rc genhtml_branch_coverage=1 00:27:17.223 --rc genhtml_function_coverage=1 00:27:17.223 --rc genhtml_legend=1 00:27:17.223 --rc geninfo_all_blocks=1 00:27:17.223 --rc geninfo_unexecuted_blocks=1 00:27:17.223 00:27:17.223 ' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.223 --rc genhtml_branch_coverage=1 00:27:17.223 --rc genhtml_function_coverage=1 00:27:17.223 --rc genhtml_legend=1 00:27:17.223 --rc geninfo_all_blocks=1 00:27:17.223 --rc geninfo_unexecuted_blocks=1 00:27:17.223 00:27:17.223 ' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.223 --rc genhtml_branch_coverage=1 00:27:17.223 --rc genhtml_function_coverage=1 00:27:17.223 --rc genhtml_legend=1 00:27:17.223 --rc geninfo_all_blocks=1 00:27:17.223 --rc geninfo_unexecuted_blocks=1 00:27:17.223 00:27:17.223 ' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:17.223 07:22:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.223 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:17.224 07:22:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:17.224 07:22:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.369 
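The gather_supported_nvmf_pci_devs pass below builds per-family PCI-ID lists (e810, x722, mlx) and walks them; the two "Found 0000:4b:00.x (0x8086 - 0x159b)" hits are the Intel E810 ports used for this run. A minimal sketch of the same sysfs scan, simplified to the one vendor/device pair matched here:

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        # net/ holds the kernel interface names bound to this port (cvl_0_0, cvl_0_1 here)
        echo "Found ${pci##*/} ($vendor - $device): $(ls "$pci/net" 2>/dev/null)"
    done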
07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:25.369 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:25.369 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.369 07:22:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.369 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:25.370 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:25.370 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
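nvmf_tcp_init, which runs next, splits the two E810 ports across network namespaces so target and initiator get distinct IP stacks on one box: the target interface moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, an iptables ACCEPT rule opens port 4420, and a ping in each direction proves reachability. Condensed from the commands logged below (the harness additionally tags the iptables rule with an SPDK_NVMF comment so the later iptables-save | grep -v SPDK_NVMF cleanup can strip it):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator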
00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:27:25.370 00:27:25.370 --- 10.0.0.2 ping statistics --- 00:27:25.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.370 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:27:25.370 00:27:25.370 --- 10.0.0.1 ping statistics --- 00:27:25.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.370 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2476032 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2476032 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2476032 ']' 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.370 07:22:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 [2024-11-27 07:22:35.671383] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:27:25.370 [2024-11-27 07:22:35.671450] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.370 [2024-11-27 07:22:35.772392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.370 [2024-11-27 07:22:35.825239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.370 [2024-11-27 07:22:35.825295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.370 [2024-11-27 07:22:35.825304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.370 [2024-11-27 07:22:35.825311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.370 [2024-11-27 07:22:35.825318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.370 [2024-11-27 07:22:35.827406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.370 [2024-11-27 07:22:35.827679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.370 [2024-11-27 07:22:35.827680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 [2024-11-27 07:22:36.548326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 Malloc0 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 [2024-11-27 07:22:36.623476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 [2024-11-27 07:22:36.635353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 Malloc1 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2476163 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2476163 /var/tmp/bdevperf.sock 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2476163 ']' 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:25.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
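A minimal sketch of the target-side setup that the rpc_cmd calls above perform, assuming rpc_cmd is the harness's thin wrapper over SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock socket (the repo-root-relative path is illustrative):

    RPC="./scripts/rpc.py"                                # default socket: /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2/Malloc1 repeat the same pattern on the same two listeners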
00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.632 07:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.576 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.576 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:26.576 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:26.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.577 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.839 NVMe0n1 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.839 1 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.839 request: 00:27:26.839 { 00:27:26.839 "name": "NVMe0", 00:27:26.839 "trtype": "tcp", 00:27:26.839 "traddr": "10.0.0.2", 00:27:26.839 "adrfam": "ipv4", 00:27:26.839 "trsvcid": "4420", 00:27:26.839 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:26.839 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:26.839 "hostaddr": "10.0.0.1", 00:27:26.839 "prchk_reftag": false, 00:27:26.839 "prchk_guard": false, 00:27:26.839 "hdgst": false, 00:27:26.839 "ddgst": false, 00:27:26.839 "allow_unrecognized_csi": false, 00:27:26.839 "method": "bdev_nvme_attach_controller", 00:27:26.839 "req_id": 1 00:27:26.839 } 00:27:26.839 Got JSON-RPC error response 00:27:26.839 response: 00:27:26.839 { 00:27:26.839 "code": -114, 00:27:26.839 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:26.839 } 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:26.839 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.840 request: 00:27:26.840 { 00:27:26.840 "name": "NVMe0", 00:27:26.840 "trtype": "tcp", 00:27:26.840 "traddr": "10.0.0.2", 00:27:26.840 "adrfam": "ipv4", 00:27:26.840 "trsvcid": "4420", 00:27:26.840 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:26.840 "hostaddr": "10.0.0.1", 00:27:26.840 "prchk_reftag": false, 00:27:26.840 "prchk_guard": false, 00:27:26.840 "hdgst": false, 00:27:26.840 "ddgst": false, 00:27:26.840 "allow_unrecognized_csi": false, 00:27:26.840 "method": "bdev_nvme_attach_controller", 00:27:26.840 "req_id": 1 00:27:26.840 } 00:27:26.840 Got JSON-RPC error response 00:27:26.840 response: 00:27:26.840 { 00:27:26.840 "code": -114, 00:27:26.840 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:26.840 } 00:27:26.840 07:22:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.840 request: 00:27:26.840 { 00:27:26.840 "name": "NVMe0", 00:27:26.840 "trtype": "tcp", 00:27:26.840 "traddr": "10.0.0.2", 00:27:26.840 "adrfam": "ipv4", 00:27:26.840 "trsvcid": "4420", 00:27:26.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.840 "hostaddr": "10.0.0.1", 00:27:26.840 "prchk_reftag": false, 00:27:26.840 "prchk_guard": false, 00:27:26.840 "hdgst": false, 00:27:26.840 "ddgst": false, 00:27:26.840 "multipath": "disable", 00:27:26.840 "allow_unrecognized_csi": false, 00:27:26.840 "method": "bdev_nvme_attach_controller", 00:27:26.840 "req_id": 1 00:27:26.840 } 00:27:26.840 Got JSON-RPC error response 00:27:26.840 response: 00:27:26.840 { 00:27:26.840 "code": -114, 00:27:26.840 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:26.840 } 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.840 07:22:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.840 request: 00:27:26.840 { 00:27:26.840 "name": "NVMe0", 00:27:26.840 "trtype": "tcp", 00:27:26.840 "traddr": "10.0.0.2", 00:27:26.840 "adrfam": "ipv4", 00:27:26.840 "trsvcid": "4420", 00:27:26.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.840 "hostaddr": "10.0.0.1", 00:27:26.840 "prchk_reftag": false, 00:27:26.840 "prchk_guard": false, 00:27:26.840 "hdgst": false, 00:27:26.840 "ddgst": false, 00:27:26.840 "multipath": "failover", 00:27:26.840 "allow_unrecognized_csi": false, 00:27:26.840 "method": "bdev_nvme_attach_controller", 00:27:26.840 "req_id": 1 00:27:26.840 } 00:27:26.840 Got JSON-RPC error response 00:27:26.840 response: 00:27:26.840 { 00:27:26.840 "code": -114, 00:27:26.840 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:26.840 } 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.840 07:22:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.102 NVMe0n1 00:27:27.102 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
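To summarize the attach semantics exercised above: every re-attach of a controller named NVMe0 to the already-registered network path (10.0.0.2:4420) fails with -114, whether a different hostnqn is supplied, multipath is "disable", or multipath is "failover"; attaching NVMe0 to a second listener of the same subsystem (port 4421) succeeds and adds a path. A condensed sketch of that sequence against the bdevperf RPC socket, with addresses as in this run:

    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # first attach: creates bdev NVMe0n1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # same name + same traddr/trsvcid: rejected (-114) in all three variants above
    # same name, second listener on port 4421: accepted as an additional path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1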
00:27:27.102 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:27.103 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.103 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.103 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.103 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:27.103 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.103 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.364 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:27.364 07:22:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:28.307 { 00:27:28.307 "results": [ 00:27:28.307 { 00:27:28.307 "job": "NVMe0n1", 00:27:28.307 "core_mask": "0x1", 00:27:28.307 "workload": "write", 00:27:28.307 "status": "finished", 00:27:28.307 "queue_depth": 128, 00:27:28.307 "io_size": 4096, 00:27:28.307 "runtime": 1.006325, 00:27:28.307 "iops": 26721.983454649344, 00:27:28.307 "mibps": 104.382747869724, 00:27:28.307 "io_failed": 0, 00:27:28.307 "io_timeout": 0, 00:27:28.307 "avg_latency_us": 4778.459767208359, 00:27:28.307 "min_latency_us": 2116.266666666667, 00:27:28.307 "max_latency_us": 10813.44 00:27:28.307 } 00:27:28.307 ], 00:27:28.307 "core_count": 1 00:27:28.307 } 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2476163 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2476163 ']' 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2476163 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476163 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476163' 00:27:28.569 killing process with pid 2476163 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2476163 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2476163 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:28.569 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:27:28.570 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:28.570 [2024-11-27 07:22:36.766794] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:27:28.570 [2024-11-27 07:22:36.766874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476163 ] 00:27:28.570 [2024-11-27 07:22:36.859715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.570 [2024-11-27 07:22:36.913668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.570 [2024-11-27 07:22:38.374352] bdev.c:4926:bdev_name_add: *ERROR*: Bdev name bc8b99f4-521d-46e6-aa8d-ed98bfa8c2e5 already exists 00:27:28.570 [2024-11-27 07:22:38.374385] bdev.c:8146:bdev_register: *ERROR*: Unable to add uuid:bc8b99f4-521d-46e6-aa8d-ed98bfa8c2e5 alias for bdev NVMe1n1 00:27:28.570 [2024-11-27 07:22:38.374393] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:28.570 Running I/O for 1 seconds... 00:27:28.570 26713.00 IOPS, 104.35 MiB/s 00:27:28.570 Latency(us) 00:27:28.570 [2024-11-27T06:22:39.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.570 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:28.570 NVMe0n1 : 1.01 26721.98 104.38 0.00 0.00 4778.46 2116.27 10813.44 00:27:28.570 [2024-11-27T06:22:39.775Z] =================================================================================================================== 00:27:28.570 [2024-11-27T06:22:39.775Z] Total : 26721.98 104.38 0.00 0.00 4778.46 2116.27 10813.44 00:27:28.570 Received shutdown signal, test time was about 1.000000 seconds 00:27:28.570 00:27:28.570 Latency(us) 00:27:28.570 [2024-11-27T06:22:39.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.570 [2024-11-27T06:22:39.775Z] =================================================================================================================== 00:27:28.570 [2024-11-27T06:22:39.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.570 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.570 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.831 rmmod nvme_tcp 00:27:28.831 rmmod nvme_fabrics 00:27:28.831 rmmod nvme_keyring 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
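The try.txt dump above records how the measurement was driven: bdevperf was started idle with -z on its own RPC socket, the controllers were attached over that socket, and bdevperf.py's perform_tests RPC then ran the configured workload (-q 128 -o 4096 -w write -t 1) and returned the JSON summary shown. A minimal sketch from the SPDK repo root, with paths shortened and the socket name as in this run:

    SOCK=/var/tmp/bdevperf.sock
    ./build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w write -t 1 -f &
    # ... bdev_nvme_attach_controller calls against $SOCK, as above ...
    ./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests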
00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2476032 ']' 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2476032 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2476032 ']' 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2476032 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476032 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476032' 00:27:28.831 killing process with pid 2476032 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2476032 00:27:28.831 07:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2476032 00:27:28.831 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:28.831 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:28.831 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:28.831 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:28.831 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:27:28.831 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:28.831 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:27:29.092 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:29.092 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:29.092 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.092 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.092 07:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.009 00:27:31.009 real 0m14.085s 00:27:31.009 user 0m18.032s 00:27:31.009 sys 0m6.464s 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.009 ************************************ 00:27:31.009 END TEST nvmf_multicontroller 00:27:31.009 ************************************ 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.009 ************************************ 00:27:31.009 START TEST nvmf_aer 00:27:31.009 ************************************ 00:27:31.009 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:31.269 * Looking for test storage... 00:27:31.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.269 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.270 --rc genhtml_branch_coverage=1 00:27:31.270 --rc genhtml_function_coverage=1 00:27:31.270 --rc genhtml_legend=1 00:27:31.270 --rc geninfo_all_blocks=1 00:27:31.270 --rc geninfo_unexecuted_blocks=1 00:27:31.270 00:27:31.270 ' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.270 --rc genhtml_branch_coverage=1 00:27:31.270 --rc genhtml_function_coverage=1 00:27:31.270 --rc genhtml_legend=1 00:27:31.270 --rc geninfo_all_blocks=1 00:27:31.270 --rc geninfo_unexecuted_blocks=1 00:27:31.270 00:27:31.270 ' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.270 --rc genhtml_branch_coverage=1 00:27:31.270 --rc genhtml_function_coverage=1 00:27:31.270 --rc genhtml_legend=1 00:27:31.270 --rc geninfo_all_blocks=1 00:27:31.270 --rc geninfo_unexecuted_blocks=1 00:27:31.270 00:27:31.270 ' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.270 --rc genhtml_branch_coverage=1 00:27:31.270 --rc genhtml_function_coverage=1 00:27:31.270 --rc genhtml_legend=1 00:27:31.270 --rc geninfo_all_blocks=1 00:27:31.270 --rc geninfo_unexecuted_blocks=1 00:27:31.270 00:27:31.270 ' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.270 07:22:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.413 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:39.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:39.414 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:39.414 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.414 07:22:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:39.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.414 
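With both cvl interfaces found, nvmf_tcp_init builds the two-port topology the trace shows: the target port is moved into the cvl_0_0_ns_spdk network namespace so traffic between the initiator (10.0.0.1, root namespace) and the target (10.0.0.2, inside the namespace) actually crosses between the two E810 ports. A condensed sketch of those steps, with interface names and addresses exactly as in this run:

# Sketch of the netns topology built above; names and IPs are this run's,
# not a general-purpose script.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP (port 4420) in, tagged so teardown can strip only this rule
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'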
07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:27:39.414 00:27:39.414 --- 10.0.0.2 ping statistics --- 00:27:39.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.414 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:27:39.414 00:27:39.414 --- 10.0.0.1 ping statistics --- 00:27:39.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.414 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2481014 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2481014 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2481014 ']' 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.414 07:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.414 [2024-11-27 07:22:50.050589] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
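After ping verifies connectivity in both directions, nvmfappstart launches nvmf_tgt inside the namespace (pid 2481014 here) and waitforlisten blocks until the target's RPC socket is usable. Below is a simplified stand-in for that helper; the real one in autotest_common.sh is more thorough (it probes the socket with an actual RPC rather than just testing that it exists), and the binary path assumes the SPDK repo root:

# Sketch: start the target in the namespace and poll for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                   # pid of the ip-netns wrapper here
waitforlisten() {                            # simplified stand-in for the helper
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        [ -S "$sock" ] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1
}
waitforlisten "$nvmfpid"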
00:27:39.415 [2024-11-27 07:22:50.050662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.415 [2024-11-27 07:22:50.151400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.415 [2024-11-27 07:22:50.206478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.415 [2024-11-27 07:22:50.206532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.415 [2024-11-27 07:22:50.206541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.415 [2024-11-27 07:22:50.206548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.415 [2024-11-27 07:22:50.206554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.415 [2024-11-27 07:22:50.208871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.415 [2024-11-27 07:22:50.209022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.415 [2024-11-27 07:22:50.209150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.415 [2024-11-27 07:22:50.209151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.988 [2024-11-27 07:22:50.935099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.988 Malloc0 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.988 07:22:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.988 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.989 [2024-11-27 07:22:51.012884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.989 [ 00:27:39.989 { 00:27:39.989 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:39.989 "subtype": "Discovery", 00:27:39.989 "listen_addresses": [], 00:27:39.989 "allow_any_host": true, 00:27:39.989 "hosts": [] 00:27:39.989 }, 00:27:39.989 { 00:27:39.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.989 "subtype": "NVMe", 00:27:39.989 "listen_addresses": [ 00:27:39.989 { 00:27:39.989 "trtype": "TCP", 00:27:39.989 "adrfam": "IPv4", 00:27:39.989 "traddr": "10.0.0.2", 00:27:39.989 "trsvcid": "4420" 00:27:39.989 } 00:27:39.989 ], 00:27:39.989 "allow_any_host": true, 00:27:39.989 "hosts": [], 00:27:39.989 "serial_number": "SPDK00000000000001", 00:27:39.989 "model_number": "SPDK bdev Controller", 00:27:39.989 "max_namespaces": 2, 00:27:39.989 "min_cntlid": 1, 00:27:39.989 "max_cntlid": 65519, 00:27:39.989 "namespaces": [ 00:27:39.989 { 00:27:39.989 "nsid": 1, 00:27:39.989 "bdev_name": "Malloc0", 00:27:39.989 "name": "Malloc0", 00:27:39.989 "nguid": "722E2CB5F92A43C58FE2007C0F5D32FD", 00:27:39.989 "uuid": "722e2cb5-f92a-43c5-8fe2-007c0f5d32fd" 00:27:39.989 } 00:27:39.989 ] 00:27:39.989 } 00:27:39.989 ] 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2481195 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
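The nvmf_get_subsystems output above confirms what host/aer.sh provisioned through rpc_cmd: a TCP transport, a 64 MiB malloc bdev as namespace 1 of cnode1 (capped at two namespaces by -m 2), and a listener on 10.0.0.2:4420. The same sequence replayed directly with scripts/rpc.py, which rpc_cmd wraps; the path assumes the SPDK repo root:

# Sketch: the provisioning RPCs traced above, replayed by hand.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192          # transport opts exactly as traced
$RPC bdev_malloc_create 64 512 --name Malloc0         # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2                     # any host, at most 2 namespaces
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                              # emits the JSON shown above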
-e /tmp/aer_touch_file ']' 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:27:39.989 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.250 Malloc1 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.250 Asynchronous Event Request test 00:27:40.250 Attaching to 10.0.0.2 00:27:40.250 Attached to 10.0.0.2 00:27:40.250 Registering asynchronous event callbacks... 00:27:40.250 Starting namespace attribute notice tests for all controllers... 00:27:40.250 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:40.250 aer_cb - Changed Namespace 00:27:40.250 Cleaning up... 
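The @1269-@1280 trace above is autotest's file-polling loop: the aer tool touches /tmp/aer_touch_file once its asynchronous-event callback is registered, the harness waits for that file (three 0.1 s iterations in this run), and only then hot-adds Malloc1 as namespace 2, which fires the "Changed Namespace" notice printed above. The loop reconstructed as a standalone helper:

# Sketch of the waitforfile loop traced above (autotest_common.sh @1269-@1280).
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        [ $i -lt 200 ] || return 1       # give up after roughly 20 seconds
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}
waitforfile /tmp/aer_touch_file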
00:27:40.250 [ 00:27:40.250 { 00:27:40.250 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:40.250 "subtype": "Discovery", 00:27:40.250 "listen_addresses": [], 00:27:40.250 "allow_any_host": true, 00:27:40.250 "hosts": [] 00:27:40.250 }, 00:27:40.250 { 00:27:40.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.250 "subtype": "NVMe", 00:27:40.250 "listen_addresses": [ 00:27:40.250 { 00:27:40.250 "trtype": "TCP", 00:27:40.250 "adrfam": "IPv4", 00:27:40.250 "traddr": "10.0.0.2", 00:27:40.250 "trsvcid": "4420" 00:27:40.250 } 00:27:40.250 ], 00:27:40.250 "allow_any_host": true, 00:27:40.250 "hosts": [], 00:27:40.250 "serial_number": "SPDK00000000000001", 00:27:40.250 "model_number": "SPDK bdev Controller", 00:27:40.250 "max_namespaces": 2, 00:27:40.250 "min_cntlid": 1, 00:27:40.250 "max_cntlid": 65519, 00:27:40.250 "namespaces": [ 00:27:40.250 { 00:27:40.250 "nsid": 1, 00:27:40.250 "bdev_name": "Malloc0", 00:27:40.250 "name": "Malloc0", 00:27:40.250 "nguid": "722E2CB5F92A43C58FE2007C0F5D32FD", 00:27:40.250 "uuid": "722e2cb5-f92a-43c5-8fe2-007c0f5d32fd" 00:27:40.250 }, 00:27:40.250 { 00:27:40.250 "nsid": 2, 00:27:40.250 "bdev_name": "Malloc1", 00:27:40.250 "name": "Malloc1", 00:27:40.250 "nguid": "FC692165CDFB4057828826EEC4BCB6D2", 00:27:40.250 "uuid": "fc692165-cdfb-4057-8288-26eec4bcb6d2" 00:27:40.250 } 00:27:40.250 ] 00:27:40.250 } 00:27:40.250 ] 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2481195 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.250 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:40.513 rmmod 
nvme_tcp 00:27:40.513 rmmod nvme_fabrics 00:27:40.513 rmmod nvme_keyring 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2481014 ']' 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2481014 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2481014 ']' 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2481014 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2481014 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2481014' 00:27:40.513 killing process with pid 2481014 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2481014 00:27:40.513 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2481014 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.775 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.776 07:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.702 07:22:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:42.963 00:27:42.963 real 0m11.706s 00:27:42.963 user 0m8.538s 00:27:42.963 sys 0m6.372s 00:27:42.963 07:22:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.963 07:22:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:42.963 ************************************ 00:27:42.963 END TEST nvmf_aer 00:27:42.963 ************************************ 00:27:42.963 07:22:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
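nvmftestfini then unwinds everything in reverse: kernel modules, the tagged firewall rule, the namespace, and leftover addresses. A sketch of that teardown; _remove_spdk_ns runs with xtrace disabled in this log, so the ip netns delete line below is the assumed standard equivalent rather than the traced command:

# Sketch of the teardown traced above.
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 2481014 above
modprobe -v -r nvme-tcp                               # the rmmod lines above are its output
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1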
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:42.963 07:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:42.963 07:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.963 07:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.963 ************************************ 00:27:42.963 START TEST nvmf_async_init 00:27:42.963 ************************************ 00:27:42.963 07:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:42.963 * Looking for test storage... 00:27:42.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:42.963 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:42.963 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:27:42.963 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.225 --rc genhtml_branch_coverage=1 00:27:43.225 --rc genhtml_function_coverage=1 00:27:43.225 --rc genhtml_legend=1 00:27:43.225 --rc geninfo_all_blocks=1 00:27:43.225 --rc geninfo_unexecuted_blocks=1 00:27:43.225 00:27:43.225 ' 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.225 --rc genhtml_branch_coverage=1 00:27:43.225 --rc genhtml_function_coverage=1 00:27:43.225 --rc genhtml_legend=1 00:27:43.225 --rc geninfo_all_blocks=1 00:27:43.225 --rc geninfo_unexecuted_blocks=1 00:27:43.225 00:27:43.225 ' 00:27:43.225 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.226 --rc genhtml_branch_coverage=1 00:27:43.226 --rc genhtml_function_coverage=1 00:27:43.226 --rc genhtml_legend=1 00:27:43.226 --rc geninfo_all_blocks=1 00:27:43.226 --rc geninfo_unexecuted_blocks=1 00:27:43.226 00:27:43.226 ' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:43.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.226 --rc genhtml_branch_coverage=1 00:27:43.226 --rc genhtml_function_coverage=1 00:27:43.226 --rc genhtml_legend=1 00:27:43.226 --rc geninfo_all_blocks=1 00:27:43.226 --rc geninfo_unexecuted_blocks=1 00:27:43.226 00:27:43.226 ' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.226 07:22:54 
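The scripts/common.sh trace above is a field-wise version comparison: lt 1.15 2 splits both strings on dots and dashes and compares the fields numerically, so the lcov in this tree (1.15) is detected as older than 2.x and the matching LCOV_OPTS are exported. A compact reconstruction of that logic, without the regex validation the real script performs:

# Sketch of the cmp_versions "<" path traced above.
lt() {
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields compare as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                          # equal is not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates the 2.x option names"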
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:43.226 07:22:54 
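The "integer expression expected" message above is benign noise, not a failure: nvmf/common.sh line 33 applies a numeric test to a variable that is unset in this environment, so '[' sees an empty string where it expects an integer. The trace does not show which flag was tested, so the name below is a stand-in; the second line is the usual guard:

# Sketch: the failure mode behind the "[: : integer expression expected" above.
flag=""                                                   # stand-in for the unset flag
[ "$flag" -eq 1 ] 2>/dev/null && echo enabled             # noisy when flag is empty
[ "${flag:-0}" -eq 1 ] && echo enabled || echo disabled   # safe: default to 0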
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d5cb6b30842b41e48e1173c0b6ffb236 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.226 07:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
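async_init.sh derives its namespace NGUID the way the @20 trace shows: draw a fresh UUID and strip the hyphens, yielding the 32 hex digits later handed to nvmf_subsystem_add_ns -g; the target reports the same value back in hyphenated UUID form in bdev_get_bdevs. The round trip in two lines:

# Sketch of the NGUID derivation traced above.
uuid=$(uuidgen)      # this run drew d5cb6b30-842b-41e4-8e11-73c0b6ffb236
nguid=${uuid//-/}    # same effect as the trace's uuidgen | tr -d -
echo "$uuid <-> $nguid"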
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:51.369 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:51.369 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:51.369 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:51.369 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:51.369 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.370 07:23:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:27:51.370 00:27:51.370 --- 10.0.0.2 ping statistics --- 00:27:51.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.370 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:27:51.370 00:27:51.370 --- 10.0.0.1 ping statistics --- 00:27:51.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.370 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2485521 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2485521 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2485521 ']' 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.370 07:23:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.370 [2024-11-27 07:23:01.848141] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
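Unlike the aer run, which started nvmf_tgt with -m 0xF and brought up four reactors, async_init pins the target to a single core with -m 0x1; the reactor notice below confirms only core 0 starts. The mask is simply one bit per core, e.g.:

# Sketch: building the -m core mask from a core list; 0x1 selects core 0 only,
# 0xF selects cores 0-3 (the four reactors seen in the nvmf_aer run above).
mask=0
for core in 0; do
    mask=$((mask | (1 << core)))
done
printf -- '-m 0x%X\n' "$mask"    # prints: -m 0x1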
00:27:51.370 [2024-11-27 07:23:01.848216] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.370 [2024-11-27 07:23:01.946784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.370 [2024-11-27 07:23:01.998364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.370 [2024-11-27 07:23:01.998413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.370 [2024-11-27 07:23:01.998422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.370 [2024-11-27 07:23:01.998429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.370 [2024-11-27 07:23:01.998435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.370 [2024-11-27 07:23:01.999189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 [2024-11-27 07:23:02.708722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 null0 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d5cb6b30842b41e48e1173c0b6ffb236 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.633 [2024-11-27 07:23:02.769102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.633 07:23:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.894 nvme0n1 00:27:51.894 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.894 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:51.894 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.894 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.894 [ 00:27:51.894 { 00:27:51.894 "name": "nvme0n1", 00:27:51.894 "aliases": [ 00:27:51.894 "d5cb6b30-842b-41e4-8e11-73c0b6ffb236" 00:27:51.894 ], 00:27:51.894 "product_name": "NVMe disk", 00:27:51.894 "block_size": 512, 00:27:51.894 "num_blocks": 2097152, 00:27:51.894 "uuid": "d5cb6b30-842b-41e4-8e11-73c0b6ffb236", 00:27:51.894 "numa_id": 0, 00:27:51.894 "assigned_rate_limits": { 00:27:51.895 "rw_ios_per_sec": 0, 00:27:51.895 "rw_mbytes_per_sec": 0, 00:27:51.895 "r_mbytes_per_sec": 0, 00:27:51.895 "w_mbytes_per_sec": 0 00:27:51.895 }, 00:27:51.895 "claimed": false, 00:27:51.895 "zoned": false, 00:27:51.895 "supported_io_types": { 00:27:51.895 "read": true, 00:27:51.895 "write": true, 00:27:51.895 "unmap": false, 00:27:51.895 "flush": true, 00:27:51.895 "reset": true, 00:27:51.895 "nvme_admin": true, 00:27:51.895 "nvme_io": true, 00:27:51.895 "nvme_io_md": false, 00:27:51.895 "write_zeroes": true, 00:27:51.895 "zcopy": false, 00:27:51.895 "get_zone_info": false, 00:27:51.895 "zone_management": false, 00:27:51.895 "zone_append": false, 00:27:51.895 "compare": true, 00:27:51.895 "compare_and_write": true, 00:27:51.895 "abort": true, 00:27:51.895 "seek_hole": false, 00:27:51.895 "seek_data": false, 00:27:51.895 "copy": true, 00:27:51.895 "nvme_iov_md": false 00:27:51.895 }, 00:27:51.895 
"memory_domains": [ 00:27:51.895 { 00:27:51.895 "dma_device_id": "system", 00:27:51.895 "dma_device_type": 1 00:27:51.895 } 00:27:51.895 ], 00:27:51.895 "driver_specific": { 00:27:51.895 "nvme": [ 00:27:51.895 { 00:27:51.895 "trid": { 00:27:51.895 "trtype": "TCP", 00:27:51.895 "adrfam": "IPv4", 00:27:51.895 "traddr": "10.0.0.2", 00:27:51.895 "trsvcid": "4420", 00:27:51.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:51.895 }, 00:27:51.895 "ctrlr_data": { 00:27:51.895 "cntlid": 1, 00:27:51.895 "vendor_id": "0x8086", 00:27:51.895 "model_number": "SPDK bdev Controller", 00:27:51.895 "serial_number": "00000000000000000000", 00:27:51.895 "firmware_revision": "25.01", 00:27:51.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.895 "oacs": { 00:27:51.895 "security": 0, 00:27:51.895 "format": 0, 00:27:51.895 "firmware": 0, 00:27:51.895 "ns_manage": 0 00:27:51.895 }, 00:27:51.895 "multi_ctrlr": true, 00:27:51.895 "ana_reporting": false 00:27:51.895 }, 00:27:51.895 "vs": { 00:27:51.895 "nvme_version": "1.3" 00:27:51.895 }, 00:27:51.895 "ns_data": { 00:27:51.895 "id": 1, 00:27:51.895 "can_share": true 00:27:51.895 } 00:27:51.895 } 00:27:51.895 ], 00:27:51.895 "mp_policy": "active_passive" 00:27:51.895 } 00:27:51.895 } 00:27:51.895 ] 00:27:51.895 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.895 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:51.895 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.895 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.895 [2024-11-27 07:23:03.046831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:51.895 [2024-11-27 07:23:03.046920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16f50 (9): Bad file descriptor 00:27:52.156 [2024-11-27 07:23:03.179264] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.156 [ 00:27:52.156 { 00:27:52.156 "name": "nvme0n1", 00:27:52.156 "aliases": [ 00:27:52.156 "d5cb6b30-842b-41e4-8e11-73c0b6ffb236" 00:27:52.156 ], 00:27:52.156 "product_name": "NVMe disk", 00:27:52.156 "block_size": 512, 00:27:52.156 "num_blocks": 2097152, 00:27:52.156 "uuid": "d5cb6b30-842b-41e4-8e11-73c0b6ffb236", 00:27:52.156 "numa_id": 0, 00:27:52.156 "assigned_rate_limits": { 00:27:52.156 "rw_ios_per_sec": 0, 00:27:52.156 "rw_mbytes_per_sec": 0, 00:27:52.156 "r_mbytes_per_sec": 0, 00:27:52.156 "w_mbytes_per_sec": 0 00:27:52.156 }, 00:27:52.156 "claimed": false, 00:27:52.156 "zoned": false, 00:27:52.156 "supported_io_types": { 00:27:52.156 "read": true, 00:27:52.156 "write": true, 00:27:52.156 "unmap": false, 00:27:52.156 "flush": true, 00:27:52.156 "reset": true, 00:27:52.156 "nvme_admin": true, 00:27:52.156 "nvme_io": true, 00:27:52.156 "nvme_io_md": false, 00:27:52.156 "write_zeroes": true, 00:27:52.156 "zcopy": false, 00:27:52.156 "get_zone_info": false, 00:27:52.156 "zone_management": false, 00:27:52.156 "zone_append": false, 00:27:52.156 "compare": true, 00:27:52.156 "compare_and_write": true, 00:27:52.156 "abort": true, 00:27:52.156 "seek_hole": false, 00:27:52.156 "seek_data": false, 00:27:52.156 "copy": true, 00:27:52.156 "nvme_iov_md": false 00:27:52.156 }, 00:27:52.156 "memory_domains": [ 00:27:52.156 { 00:27:52.156 "dma_device_id": "system", 00:27:52.156 "dma_device_type": 1 00:27:52.156 } 00:27:52.156 ], 00:27:52.156 "driver_specific": { 00:27:52.156 "nvme": [ 00:27:52.156 { 00:27:52.156 "trid": { 00:27:52.156 "trtype": "TCP", 00:27:52.156 "adrfam": "IPv4", 00:27:52.156 "traddr": "10.0.0.2", 00:27:52.156 "trsvcid": "4420", 00:27:52.156 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:52.156 }, 00:27:52.156 "ctrlr_data": { 00:27:52.156 "cntlid": 2, 00:27:52.156 "vendor_id": "0x8086", 00:27:52.156 "model_number": "SPDK bdev Controller", 00:27:52.156 "serial_number": "00000000000000000000", 00:27:52.156 "firmware_revision": "25.01", 00:27:52.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.156 "oacs": { 00:27:52.156 "security": 0, 00:27:52.156 "format": 0, 00:27:52.156 "firmware": 0, 00:27:52.156 "ns_manage": 0 00:27:52.156 }, 00:27:52.156 "multi_ctrlr": true, 00:27:52.156 "ana_reporting": false 00:27:52.156 }, 00:27:52.156 "vs": { 00:27:52.156 "nvme_version": "1.3" 00:27:52.156 }, 00:27:52.156 "ns_data": { 00:27:52.156 "id": 1, 00:27:52.156 "can_share": true 00:27:52.156 } 00:27:52.156 } 00:27:52.156 ], 00:27:52.156 "mp_policy": "active_passive" 00:27:52.156 } 00:27:52.156 } 00:27:52.156 ] 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
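For reference, everything the test provisioned before the detach above condenses to a short RPC sequence; a hedged recap using scripts/rpc.py directly (the -g GUID passed to nvmf_subsystem_add_ns is what surfaces as the bdev uuid and alias in the dumps):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512        # 1024 MiB of 512 B blocks = 2097152 blocks
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d5cb6b30842b41e48e1173c0b6ffb236
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0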
00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Mr602XP8uk 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Mr602XP8uk 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Mr602XP8uk 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.156 [2024-11-27 07:23:03.267637] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:52.156 [2024-11-27 07:23:03.267808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.156 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.156 [2024-11-27 07:23:03.291715] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:52.418 nvme0n1 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.418 [ 00:27:52.418 { 00:27:52.418 "name": "nvme0n1", 00:27:52.418 "aliases": [ 00:27:52.418 "d5cb6b30-842b-41e4-8e11-73c0b6ffb236" 00:27:52.418 ], 00:27:52.418 "product_name": "NVMe disk", 00:27:52.418 "block_size": 512, 00:27:52.418 "num_blocks": 2097152, 00:27:52.418 "uuid": "d5cb6b30-842b-41e4-8e11-73c0b6ffb236", 00:27:52.418 "numa_id": 0, 00:27:52.418 "assigned_rate_limits": { 00:27:52.418 "rw_ios_per_sec": 0, 00:27:52.418 "rw_mbytes_per_sec": 0, 00:27:52.418 "r_mbytes_per_sec": 0, 00:27:52.418 "w_mbytes_per_sec": 0 00:27:52.418 }, 00:27:52.418 "claimed": false, 00:27:52.418 "zoned": false, 00:27:52.418 "supported_io_types": { 00:27:52.418 "read": true, 00:27:52.418 "write": true, 00:27:52.418 "unmap": false, 00:27:52.418 "flush": true, 00:27:52.418 "reset": true, 00:27:52.418 "nvme_admin": true, 00:27:52.418 "nvme_io": true, 00:27:52.418 "nvme_io_md": false, 00:27:52.418 "write_zeroes": true, 00:27:52.418 "zcopy": false, 00:27:52.418 "get_zone_info": false, 00:27:52.418 "zone_management": false, 00:27:52.418 "zone_append": false, 00:27:52.418 "compare": true, 00:27:52.418 "compare_and_write": true, 00:27:52.418 "abort": true, 00:27:52.418 "seek_hole": false, 00:27:52.418 "seek_data": false, 00:27:52.418 "copy": true, 00:27:52.418 "nvme_iov_md": false 00:27:52.418 }, 00:27:52.418 "memory_domains": [ 00:27:52.418 { 00:27:52.418 "dma_device_id": "system", 00:27:52.418 "dma_device_type": 1 00:27:52.418 } 00:27:52.418 ], 00:27:52.418 "driver_specific": { 00:27:52.418 "nvme": [ 00:27:52.418 { 00:27:52.418 "trid": { 00:27:52.418 "trtype": "TCP", 00:27:52.418 "adrfam": "IPv4", 00:27:52.418 "traddr": "10.0.0.2", 00:27:52.418 "trsvcid": "4421", 00:27:52.418 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:52.418 }, 00:27:52.418 "ctrlr_data": { 00:27:52.418 "cntlid": 3, 00:27:52.418 "vendor_id": "0x8086", 00:27:52.418 "model_number": "SPDK bdev Controller", 00:27:52.418 "serial_number": "00000000000000000000", 00:27:52.418 "firmware_revision": "25.01", 00:27:52.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.418 "oacs": { 00:27:52.418 "security": 0, 00:27:52.418 "format": 0, 00:27:52.418 "firmware": 0, 00:27:52.418 "ns_manage": 0 00:27:52.418 }, 00:27:52.418 "multi_ctrlr": true, 00:27:52.418 "ana_reporting": false 00:27:52.418 }, 00:27:52.418 "vs": { 00:27:52.418 "nvme_version": "1.3" 00:27:52.418 }, 00:27:52.418 "ns_data": { 00:27:52.418 "id": 1, 00:27:52.418 "can_share": true 00:27:52.418 } 00:27:52.418 } 00:27:52.418 ], 00:27:52.418 "mp_policy": "active_passive" 00:27:52.418 } 00:27:52.418 } 00:27:52.418 ] 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Mr602XP8uk 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
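The 4421 leg just torn down is the TLS variant of the same attach. The PSK is written in the NVMe/TCP interchange format (NVMeTLSkey-1:01:...), the test chmods it to 0600 before registering it, and both sides then reference the key by its keyring name rather than by path. A condensed, hedged sketch of the commands the log shows, with key_path standing in for the mktemp result:

  chmod 0600 "$key_path"
  ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listener and the attach print the "TLS support is considered experimental" notice, and the resulting controller (cntlid 3 in the dump above) is otherwise identical to the plaintext one.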
00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.418 rmmod nvme_tcp 00:27:52.418 rmmod nvme_fabrics 00:27:52.418 rmmod nvme_keyring 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2485521 ']' 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2485521 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2485521 ']' 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2485521 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485521 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485521' 00:27:52.418 killing process with pid 2485521 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2485521 00:27:52.418 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2485521 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
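nvmftestfini is the mirror image of the setup: unload the host-side NVMe modules, kill the target by pid, then strip the SPDK iptables rules and flush the test addresses. A hedged condensation (the pid and interface name are specific to this run):

  modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
  modprobe -v -r nvme-fabrics
  kill 2485521                   # nvmf_tgt pid recorded at startup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1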
00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.679 07:23:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.663 07:23:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.663 00:27:54.663 real 0m11.810s 00:27:54.663 user 0m4.325s 00:27:54.663 sys 0m6.052s 00:27:54.663 07:23:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.663 07:23:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:54.663 ************************************ 00:27:54.663 END TEST nvmf_async_init 00:27:54.663 ************************************ 00:27:54.663 07:23:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:54.663 07:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:54.663 07:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.663 07:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.924 ************************************ 00:27:54.924 START TEST dma 00:27:54.924 ************************************ 00:27:54.925 07:23:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:54.925 * Looking for test storage... 00:27:54.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.925 07:23:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:54.925 07:23:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:27:54.925 07:23:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:54.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.925 --rc genhtml_branch_coverage=1 00:27:54.925 --rc genhtml_function_coverage=1 00:27:54.925 --rc genhtml_legend=1 00:27:54.925 --rc geninfo_all_blocks=1 00:27:54.925 --rc geninfo_unexecuted_blocks=1 00:27:54.925 00:27:54.925 ' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:54.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.925 --rc genhtml_branch_coverage=1 00:27:54.925 --rc genhtml_function_coverage=1 00:27:54.925 --rc genhtml_legend=1 00:27:54.925 --rc geninfo_all_blocks=1 00:27:54.925 --rc geninfo_unexecuted_blocks=1 00:27:54.925 00:27:54.925 ' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:54.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.925 --rc genhtml_branch_coverage=1 00:27:54.925 --rc genhtml_function_coverage=1 00:27:54.925 --rc genhtml_legend=1 00:27:54.925 --rc geninfo_all_blocks=1 00:27:54.925 --rc geninfo_unexecuted_blocks=1 00:27:54.925 00:27:54.925 ' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:54.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.925 --rc genhtml_branch_coverage=1 00:27:54.925 --rc genhtml_function_coverage=1 00:27:54.925 --rc genhtml_legend=1 00:27:54.925 --rc geninfo_all_blocks=1 00:27:54.925 --rc geninfo_unexecuted_blocks=1 00:27:54.925 00:27:54.925 ' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.925 
07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:54.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:54.925 00:27:54.925 real 0m0.239s 00:27:54.925 user 0m0.137s 00:27:54.925 sys 0m0.116s 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.925 07:23:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:54.925 ************************************ 00:27:54.925 END TEST dma 00:27:54.925 ************************************ 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.187 ************************************ 00:27:55.187 START TEST nvmf_identify 00:27:55.187 
************************************ 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:55.187 * Looking for test storage... 00:27:55.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.187 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:55.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.449 --rc genhtml_branch_coverage=1 00:27:55.449 --rc genhtml_function_coverage=1 00:27:55.449 --rc genhtml_legend=1 00:27:55.449 --rc geninfo_all_blocks=1 00:27:55.449 --rc geninfo_unexecuted_blocks=1 00:27:55.449 00:27:55.449 ' 00:27:55.449 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:55.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.449 --rc genhtml_branch_coverage=1 00:27:55.449 --rc genhtml_function_coverage=1 00:27:55.449 --rc genhtml_legend=1 00:27:55.449 --rc geninfo_all_blocks=1 00:27:55.449 --rc geninfo_unexecuted_blocks=1 00:27:55.449 00:27:55.449 ' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:55.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.450 --rc genhtml_branch_coverage=1 00:27:55.450 --rc genhtml_function_coverage=1 00:27:55.450 --rc genhtml_legend=1 00:27:55.450 --rc geninfo_all_blocks=1 00:27:55.450 --rc geninfo_unexecuted_blocks=1 00:27:55.450 00:27:55.450 ' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:55.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.450 --rc genhtml_branch_coverage=1 00:27:55.450 --rc genhtml_function_coverage=1 00:27:55.450 --rc genhtml_legend=1 00:27:55.450 --rc geninfo_all_blocks=1 00:27:55.450 --rc geninfo_unexecuted_blocks=1 00:27:55.450 00:27:55.450 ' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:55.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.450 07:23:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.595 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:03.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:03.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
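The scan above matched both E810 ports (0x8086:0x159b, bound to the ice driver) out of common.sh's PCI ID tables; the net devices it is about to report are resolved purely through sysfs, as the next lines show. A minimal standalone sketch of that lookup, assuming the same 0000:4b:00.x addresses from this run:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      # driver name, e.g. "ice" for E810
      basename "$(readlink /sys/bus/pci/devices/$pci/driver)"
      # net devices hanging off the function, e.g. cvl_0_0 / cvl_0_1
      ls "/sys/bus/pci/devices/$pci/net/"
  done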
00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:03.596 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:03.596 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:28:03.596 00:28:03.596 --- 10.0.0.2 ping statistics --- 00:28:03.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.596 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:28:03.596 00:28:03.596 --- 10.0.0.1 ping statistics --- 00:28:03.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.596 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2490258 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2490258 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2490258 ']' 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.596 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 [2024-11-27 07:23:14.031752] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
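Before the target's startup banner continues below: the cross-namespace ping exchange just traced is the harness verifying the point-to-point NVMe/TCP topology it built, with the target port (cvl_0_0, 10.0.0.2) isolated in its own network namespace and the initiator port (cvl_0_1, 10.0.0.1) left in the root namespace. A minimal standalone sketch of the same plumbing, assuming two cabled ports that enumerated as cvl_0_0 and cvl_0_1 (adjust the names to the hardware at hand):

    # Hypothetical re-creation of the namespace topology from the trace above.
    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back

Isolating the target in a namespace lets one machine exercise a real NIC-to-NIC TCP path instead of loopback, which is exactly why the harness then launches nvmf_tgt under ip netns exec.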
00:28:03.596 [2024-11-27 07:23:14.031817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.596 [2024-11-27 07:23:14.131752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:03.596 [2024-11-27 07:23:14.185687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.596 [2024-11-27 07:23:14.185740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.596 [2024-11-27 07:23:14.185749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.596 [2024-11-27 07:23:14.185756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.596 [2024-11-27 07:23:14.185762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.596 [2024-11-27 07:23:14.187809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.596 [2024-11-27 07:23:14.187970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.596 [2024-11-27 07:23:14.188138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.596 [2024-11-27 07:23:14.188138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.858 [2024-11-27 07:23:14.868815] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.858 Malloc0 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:03.858 [2024-11-27 07:23:14.985915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:03.858 07:23:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:03.858 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:03.858 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:28:03.858 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:03.858 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:03.858 [
00:28:03.858   {
00:28:03.858     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:03.858     "subtype": "Discovery",
00:28:03.858     "listen_addresses": [
00:28:03.858       {
00:28:03.858         "trtype": "TCP",
00:28:03.858         "adrfam": "IPv4",
00:28:03.858         "traddr": "10.0.0.2",
00:28:03.858         "trsvcid": "4420"
00:28:03.858       }
00:28:03.858     ],
00:28:03.858     "allow_any_host": true,
00:28:03.858     "hosts": []
00:28:03.858   },
00:28:03.858   {
00:28:03.858     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:03.858     "subtype": "NVMe",
00:28:03.858     "listen_addresses": [
00:28:03.858       {
00:28:03.858         "trtype": "TCP",
00:28:03.858         "adrfam": "IPv4",
00:28:03.858         "traddr": "10.0.0.2",
00:28:03.858         "trsvcid": "4420"
00:28:03.858       }
00:28:03.858     ],
00:28:03.858     "allow_any_host": true,
00:28:03.858     "hosts": [],
00:28:03.858     "serial_number": "SPDK00000000000001",
00:28:03.858     "model_number": "SPDK bdev Controller",
00:28:03.859     "max_namespaces": 32,
00:28:03.859     "min_cntlid": 1,
00:28:03.859     "max_cntlid": 65519,
00:28:03.859     "namespaces": [
00:28:03.859       {
00:28:03.859         "nsid": 1,
00:28:03.859         "bdev_name": "Malloc0",
00:28:03.859         "name": "Malloc0",
00:28:03.859         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:28:03.859         "eui64": "ABCDEF0123456789",
00:28:03.859         "uuid": "823809eb-1a6c-4281-ba91-e1411b64c100"
00:28:03.859       }
00:28:03.859     ]
00:28:03.859   }
00:28:03.859 ]
00:28:03.859 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:03.859 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:03.859 [2024-11-27 07:23:15.051617] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:28:03.859 [2024-11-27 07:23:15.051688] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490310 ] 00:28:04.124 [2024-11-27 07:23:15.106759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:28:04.124 [2024-11-27 07:23:15.106831] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:04.124 [2024-11-27 07:23:15.106837] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:04.124 [2024-11-27 07:23:15.106855] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:04.124 [2024-11-27 07:23:15.106865] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:04.124 [2024-11-27 07:23:15.110605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:28:04.124 [2024-11-27 07:23:15.110658] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12a6690 0 00:28:04.124 [2024-11-27 07:23:15.118181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:04.124 [2024-11-27 07:23:15.118196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:04.124 [2024-11-27 07:23:15.118202] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:04.124 [2024-11-27 07:23:15.118206] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:04.124 [2024-11-27 07:23:15.118248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.118255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.118260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.124 [2024-11-27 07:23:15.118277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:04.124 [2024-11-27 07:23:15.118300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.124 [2024-11-27 07:23:15.126175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.124 [2024-11-27 07:23:15.126187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.124 [2024-11-27 07:23:15.126191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.124 [2024-11-27 07:23:15.126210] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:04.124 [2024-11-27 07:23:15.126218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:28:04.124 [2024-11-27 07:23:15.126225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:28:04.124 [2024-11-27 07:23:15.126244] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.124 [2024-11-27 07:23:15.126267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.124 [2024-11-27 07:23:15.126283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.124 [2024-11-27 07:23:15.126504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.124 [2024-11-27 07:23:15.126511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.124 [2024-11-27 07:23:15.126515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.124 [2024-11-27 07:23:15.126529] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:28:04.124 [2024-11-27 07:23:15.126537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:28:04.124 [2024-11-27 07:23:15.126544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.124 [2024-11-27 07:23:15.126559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.124 [2024-11-27 07:23:15.126570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.124 [2024-11-27 07:23:15.126752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.124 [2024-11-27 07:23:15.126759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.124 [2024-11-27 07:23:15.126763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.124 [2024-11-27 07:23:15.126773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:28:04.124 [2024-11-27 07:23:15.126782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:04.124 [2024-11-27 07:23:15.126789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.126796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.124 [2024-11-27 07:23:15.126803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.124 [2024-11-27 07:23:15.126813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 
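The debug trace above and below is SPDK's userspace initiator bringing up the admin queue against the discovery service: the icreq handshake, the FABRIC CONNECT on qid 0, and then property reads of VS and CAP before it walks CC.EN/CSTS.RDY. For reference, the target-side provisioning buried in the xtrace earlier reduces to a short RPC sequence. A condensed sketch, assuming an SPDK checkout with the stock scripts/rpc.py client talking to the default /var/tmp/spdk.sock (relative paths are an assumption here, standing in for the Jenkins workspace paths used in this run):

    # Start the target inside the namespace created earlier; flags as in the trace
    # (-i shm id, -e tracepoint group mask, -m core mask).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems                         # expect the two subsystems listed above

    # Then read the discovery controller, exactly as identify.sh does:
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all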
00:28:04.124 [2024-11-27 07:23:15.127002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.124 [2024-11-27 07:23:15.127009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.124 [2024-11-27 07:23:15.127012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.127016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.124 [2024-11-27 07:23:15.127022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:04.124 [2024-11-27 07:23:15.127032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.127036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.127040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.124 [2024-11-27 07:23:15.127046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.124 [2024-11-27 07:23:15.127057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.124 [2024-11-27 07:23:15.127248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.124 [2024-11-27 07:23:15.127256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.124 [2024-11-27 07:23:15.127259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.127263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.124 [2024-11-27 07:23:15.127268] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:04.124 [2024-11-27 07:23:15.127274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:04.124 [2024-11-27 07:23:15.127282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:04.124 [2024-11-27 07:23:15.127394] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:28:04.124 [2024-11-27 07:23:15.127399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:04.124 [2024-11-27 07:23:15.127410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.127414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.124 [2024-11-27 07:23:15.127417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.124 [2024-11-27 07:23:15.127424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.124 [2024-11-27 07:23:15.127435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.124 [2024-11-27 07:23:15.127627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.124 [2024-11-27 07:23:15.127633] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.124 [2024-11-27 07:23:15.127637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.127641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.125 [2024-11-27 07:23:15.127646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:04.125 [2024-11-27 07:23:15.127656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.127660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.127664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.127670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.125 [2024-11-27 07:23:15.127681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.125 [2024-11-27 07:23:15.127899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.125 [2024-11-27 07:23:15.127905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.125 [2024-11-27 07:23:15.127909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.127913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.125 [2024-11-27 07:23:15.127917] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:04.125 [2024-11-27 07:23:15.127922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:04.125 [2024-11-27 07:23:15.127931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:28:04.125 [2024-11-27 07:23:15.127946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:04.125 [2024-11-27 07:23:15.127958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.127962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.127970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.125 [2024-11-27 07:23:15.127980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.125 [2024-11-27 07:23:15.128237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.125 [2024-11-27 07:23:15.128244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.125 [2024-11-27 07:23:15.128248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a6690): datao=0, datal=4096, cccid=0 00:28:04.125 [2024-11-27 07:23:15.128258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1308100) on tqpair(0x12a6690): expected_datao=0, payload_size=4096 00:28:04.125 [2024-11-27 07:23:15.128263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128272] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128276] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.125 [2024-11-27 07:23:15.128439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.125 [2024-11-27 07:23:15.128443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.125 [2024-11-27 07:23:15.128455] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:28:04.125 [2024-11-27 07:23:15.128460] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:28:04.125 [2024-11-27 07:23:15.128465] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:28:04.125 [2024-11-27 07:23:15.128471] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:28:04.125 [2024-11-27 07:23:15.128476] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:28:04.125 [2024-11-27 07:23:15.128481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:28:04.125 [2024-11-27 07:23:15.128492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:04.125 [2024-11-27 07:23:15.128500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.128515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.125 [2024-11-27 07:23:15.128527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.125 [2024-11-27 07:23:15.128742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.125 [2024-11-27 07:23:15.128748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.125 [2024-11-27 07:23:15.128752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690 00:28:04.125 [2024-11-27 07:23:15.128768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a6690) 00:28:04.125 
[2024-11-27 07:23:15.128781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.125 [2024-11-27 07:23:15.128788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.128802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.125 [2024-11-27 07:23:15.128808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.128821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.125 [2024-11-27 07:23:15.128828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.128841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.125 [2024-11-27 07:23:15.128847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:04.125 [2024-11-27 07:23:15.128859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:04.125 [2024-11-27 07:23:15.128866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.128870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.128877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.125 [2024-11-27 07:23:15.128889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308100, cid 0, qid 0 00:28:04.125 [2024-11-27 07:23:15.128895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308280, cid 1, qid 0 00:28:04.125 [2024-11-27 07:23:15.128900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308400, cid 2, qid 0 00:28:04.125 [2024-11-27 07:23:15.128904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.125 [2024-11-27 07:23:15.128909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308700, cid 4, qid 0 00:28:04.125 [2024-11-27 07:23:15.129170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.125 [2024-11-27 07:23:15.129177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.125 [2024-11-27 07:23:15.129180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:04.125 [2024-11-27 07:23:15.129184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308700) on tqpair=0x12a6690 00:28:04.125 [2024-11-27 07:23:15.129190] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:28:04.125 [2024-11-27 07:23:15.129195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:28:04.125 [2024-11-27 07:23:15.129206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.129213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a6690) 00:28:04.125 [2024-11-27 07:23:15.129219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.125 [2024-11-27 07:23:15.129230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308700, cid 4, qid 0 00:28:04.125 [2024-11-27 07:23:15.129418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.125 [2024-11-27 07:23:15.129424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.125 [2024-11-27 07:23:15.129428] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.129431] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a6690): datao=0, datal=4096, cccid=4 00:28:04.125 [2024-11-27 07:23:15.129436] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1308700) on tqpair(0x12a6690): expected_datao=0, payload_size=4096 00:28:04.125 [2024-11-27 07:23:15.129440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.129456] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.129460] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.129608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.125 [2024-11-27 07:23:15.129614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.125 [2024-11-27 07:23:15.129618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.129622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308700) on tqpair=0x12a6690 00:28:04.125 [2024-11-27 07:23:15.129637] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:28:04.125 [2024-11-27 07:23:15.129666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.125 [2024-11-27 07:23:15.129671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a6690) 00:28:04.126 [2024-11-27 07:23:15.129678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.126 [2024-11-27 07:23:15.129685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.129689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.129693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a6690) 00:28:04.126 [2024-11-27 07:23:15.129699] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.126 [2024-11-27 07:23:15.129713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308700, cid 4, qid 0 00:28:04.126 [2024-11-27 07:23:15.129719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308880, cid 5, qid 0 00:28:04.126 [2024-11-27 07:23:15.129964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.126 [2024-11-27 07:23:15.129971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.126 [2024-11-27 07:23:15.129974] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.129978] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a6690): datao=0, datal=1024, cccid=4 00:28:04.126 [2024-11-27 07:23:15.129982] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1308700) on tqpair(0x12a6690): expected_datao=0, payload_size=1024 00:28:04.126 [2024-11-27 07:23:15.129987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.129994] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.129997] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.130003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.126 [2024-11-27 07:23:15.130009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.126 [2024-11-27 07:23:15.130012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.130019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308880) on tqpair=0x12a6690 00:28:04.126 [2024-11-27 07:23:15.174178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.126 [2024-11-27 07:23:15.174192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.126 [2024-11-27 07:23:15.174197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308700) on tqpair=0x12a6690 00:28:04.126 [2024-11-27 07:23:15.174215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a6690) 00:28:04.126 [2024-11-27 07:23:15.174227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.126 [2024-11-27 07:23:15.174245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308700, cid 4, qid 0 00:28:04.126 [2024-11-27 07:23:15.174462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.126 [2024-11-27 07:23:15.174469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.126 [2024-11-27 07:23:15.174473] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174477] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a6690): datao=0, datal=3072, cccid=4 00:28:04.126 [2024-11-27 07:23:15.174482] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1308700) on tqpair(0x12a6690): expected_datao=0, payload_size=3072 00:28:04.126 [2024-11-27 07:23:15.174487] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174494] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174499] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.126 [2024-11-27 07:23:15.174675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.126 [2024-11-27 07:23:15.174680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308700) on tqpair=0x12a6690 00:28:04.126 [2024-11-27 07:23:15.174693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a6690) 00:28:04.126 [2024-11-27 07:23:15.174704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.126 [2024-11-27 07:23:15.174718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308700, cid 4, qid 0 00:28:04.126 [2024-11-27 07:23:15.174923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.126 [2024-11-27 07:23:15.174930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.126 [2024-11-27 07:23:15.174934] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a6690): datao=0, datal=8, cccid=4 00:28:04.126 [2024-11-27 07:23:15.174943] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1308700) on tqpair(0x12a6690): expected_datao=0, payload_size=8 00:28:04.126 [2024-11-27 07:23:15.174947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174954] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.174958] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.215335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.126 [2024-11-27 07:23:15.215347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.126 [2024-11-27 07:23:15.215351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.126 [2024-11-27 07:23:15.215355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308700) on tqpair=0x12a6690 00:28:04.126 ===================================================== 00:28:04.126 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:04.126 ===================================================== 00:28:04.126 Controller Capabilities/Features 00:28:04.126 ================================ 00:28:04.126 Vendor ID: 0000 00:28:04.126 Subsystem Vendor ID: 0000 00:28:04.126 Serial Number: .................... 00:28:04.126 Model Number: ........................................ 
00:28:04.126 Firmware Version: 25.01
00:28:04.126 Recommended Arb Burst: 0
00:28:04.126 IEEE OUI Identifier: 00 00 00
00:28:04.126 Multi-path I/O
00:28:04.126 May have multiple subsystem ports: No
00:28:04.126 May have multiple controllers: No
00:28:04.126 Associated with SR-IOV VF: No
00:28:04.126 Max Data Transfer Size: 131072
00:28:04.126 Max Number of Namespaces: 0
00:28:04.126 Max Number of I/O Queues: 1024
00:28:04.126 NVMe Specification Version (VS): 1.3
00:28:04.126 NVMe Specification Version (Identify): 1.3
00:28:04.126 Maximum Queue Entries: 128
00:28:04.126 Contiguous Queues Required: Yes
00:28:04.126 Arbitration Mechanisms Supported
00:28:04.126 Weighted Round Robin: Not Supported
00:28:04.126 Vendor Specific: Not Supported
00:28:04.126 Reset Timeout: 15000 ms
00:28:04.126 Doorbell Stride: 4 bytes
00:28:04.126 NVM Subsystem Reset: Not Supported
00:28:04.126 Command Sets Supported
00:28:04.126 NVM Command Set: Supported
00:28:04.126 Boot Partition: Not Supported
00:28:04.126 Memory Page Size Minimum: 4096 bytes
00:28:04.126 Memory Page Size Maximum: 4096 bytes
00:28:04.126 Persistent Memory Region: Not Supported
00:28:04.126 Optional Asynchronous Events Supported
00:28:04.126 Namespace Attribute Notices: Not Supported
00:28:04.126 Firmware Activation Notices: Not Supported
00:28:04.126 ANA Change Notices: Not Supported
00:28:04.126 PLE Aggregate Log Change Notices: Not Supported
00:28:04.126 LBA Status Info Alert Notices: Not Supported
00:28:04.126 EGE Aggregate Log Change Notices: Not Supported
00:28:04.126 Normal NVM Subsystem Shutdown event: Not Supported
00:28:04.126 Zone Descriptor Change Notices: Not Supported
00:28:04.126 Discovery Log Change Notices: Supported
00:28:04.126 Controller Attributes
00:28:04.126 128-bit Host Identifier: Not Supported
00:28:04.126 Non-Operational Permissive Mode: Not Supported
00:28:04.126 NVM Sets: Not Supported
00:28:04.126 Read Recovery Levels: Not Supported
00:28:04.126 Endurance Groups: Not Supported
00:28:04.126 Predictable Latency Mode: Not Supported
00:28:04.126 Traffic Based Keep Alive: Not Supported
00:28:04.126 Namespace Granularity: Not Supported
00:28:04.126 SQ Associations: Not Supported
00:28:04.126 UUID List: Not Supported
00:28:04.126 Multi-Domain Subsystem: Not Supported
00:28:04.126 Fixed Capacity Management: Not Supported
00:28:04.126 Variable Capacity Management: Not Supported
00:28:04.126 Delete Endurance Group: Not Supported
00:28:04.126 Delete NVM Set: Not Supported
00:28:04.126 Extended LBA Formats Supported: Not Supported
00:28:04.126 Flexible Data Placement Supported: Not Supported
00:28:04.126
00:28:04.126 Controller Memory Buffer Support
00:28:04.126 ================================
00:28:04.126 Supported: No
00:28:04.126
00:28:04.126 Persistent Memory Region Support
00:28:04.126 ================================
00:28:04.126 Supported: No
00:28:04.126
00:28:04.126 Admin Command Set Attributes
00:28:04.126 ============================
00:28:04.126 Security Send/Receive: Not Supported
00:28:04.126 Format NVM: Not Supported
00:28:04.126 Firmware Activate/Download: Not Supported
00:28:04.126 Namespace Management: Not Supported
00:28:04.126 Device Self-Test: Not Supported
00:28:04.126 Directives: Not Supported
00:28:04.126 NVMe-MI: Not Supported
00:28:04.126 Virtualization Management: Not Supported
00:28:04.126 Doorbell Buffer Config: Not Supported
00:28:04.126 Get LBA Status Capability: Not Supported
00:28:04.126 Command & Feature Lockdown Capability: Not Supported
00:28:04.126 Abort Command Limit: 1
00:28:04.126 Async Event Request Limit: 4
00:28:04.126 Number of Firmware Slots: N/A
00:28:04.126 Firmware Slot 1 Read-Only: N/A
00:28:04.126 Firmware Activation Without Reset: N/A
00:28:04.127 Multiple Update Detection Support: N/A
00:28:04.127 Firmware Update Granularity: No Information Provided
00:28:04.127 Per-Namespace SMART Log: No
00:28:04.127 Asymmetric Namespace Access Log Page: Not Supported
00:28:04.127 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:04.127 Command Effects Log Page: Not Supported
00:28:04.127 Get Log Page Extended Data: Supported
00:28:04.127 Telemetry Log Pages: Not Supported
00:28:04.127 Persistent Event Log Pages: Not Supported
00:28:04.127 Supported Log Pages Log Page: May Support
00:28:04.127 Commands Supported & Effects Log Page: Not Supported
00:28:04.127 Feature Identifiers & Effects Log Page: May Support
00:28:04.127 NVMe-MI Commands & Effects Log Page: May Support
00:28:04.127 Data Area 4 for Telemetry Log: Not Supported
00:28:04.127 Error Log Page Entries Supported: 128
00:28:04.127 Keep Alive: Not Supported
00:28:04.127
00:28:04.127 NVM Command Set Attributes
00:28:04.127 ==========================
00:28:04.127 Submission Queue Entry Size
00:28:04.127 Max: 1
00:28:04.127 Min: 1
00:28:04.127 Completion Queue Entry Size
00:28:04.127 Max: 1
00:28:04.127 Min: 1
00:28:04.127 Number of Namespaces: 0
00:28:04.127 Compare Command: Not Supported
00:28:04.127 Write Uncorrectable Command: Not Supported
00:28:04.127 Dataset Management Command: Not Supported
00:28:04.127 Write Zeroes Command: Not Supported
00:28:04.127 Set Features Save Field: Not Supported
00:28:04.127 Reservations: Not Supported
00:28:04.127 Timestamp: Not Supported
00:28:04.127 Copy: Not Supported
00:28:04.127 Volatile Write Cache: Not Present
00:28:04.127 Atomic Write Unit (Normal): 1
00:28:04.127 Atomic Write Unit (PFail): 1
00:28:04.127 Atomic Compare & Write Unit: 1
00:28:04.127 Fused Compare & Write: Supported
00:28:04.127 Scatter-Gather List
00:28:04.127 SGL Command Set: Supported
00:28:04.127 SGL Keyed: Supported
00:28:04.127 SGL Bit Bucket Descriptor: Not Supported
00:28:04.127 SGL Metadata Pointer: Not Supported
00:28:04.127 Oversized SGL: Not Supported
00:28:04.127 SGL Metadata Address: Not Supported
00:28:04.127 SGL Offset: Supported
00:28:04.127 Transport SGL Data Block: Not Supported
00:28:04.127 Replay Protected Memory Block: Not Supported
00:28:04.127
00:28:04.127 Firmware Slot Information
00:28:04.127 =========================
00:28:04.127 Active slot: 0
00:28:04.127
00:28:04.127
00:28:04.127 Error Log
00:28:04.127 =========
00:28:04.127
00:28:04.127 Active Namespaces
00:28:04.127 =================
00:28:04.127 Discovery Log Page
00:28:04.127 ==================
00:28:04.127 Generation Counter: 2
00:28:04.127 Number of Records: 2
00:28:04.127 Record Format: 0
00:28:04.127
00:28:04.127 Discovery Log Entry 0
00:28:04.127 ----------------------
00:28:04.127 Transport Type: 3 (TCP)
00:28:04.127 Address Family: 1 (IPv4)
00:28:04.127 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:04.127 Entry Flags:
00:28:04.127 Duplicate Returned Information: 1
00:28:04.127 Explicit Persistent Connection Support for Discovery: 1
00:28:04.127 Transport Requirements:
00:28:04.127 Secure Channel: Not Required
00:28:04.127 Port ID: 0 (0x0000)
00:28:04.127 Controller ID: 65535 (0xffff)
00:28:04.127 Admin Max SQ Size: 128
00:28:04.127 Transport Service Identifier: 4420
00:28:04.127 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:04.127 Transport Address: 10.0.0.2
00:28:04.127 Discovery Log Entry 1
00:28:04.127 ----------------------
00:28:04.127 Transport Type: 3 (TCP)
00:28:04.127 Address Family: 1 (IPv4)
00:28:04.127 Subsystem Type: 2 (NVM Subsystem)
00:28:04.127 Entry Flags:
00:28:04.127 Duplicate Returned Information: 0
00:28:04.127 Explicit Persistent Connection Support for Discovery: 0
00:28:04.127 Transport Requirements:
00:28:04.127 Secure Channel: Not Required
00:28:04.127 Port ID: 0 (0x0000)
00:28:04.127 Controller ID: 65535 (0xffff)
00:28:04.127 Admin Max SQ Size: 128
00:28:04.127 Transport Service Identifier: 4420
00:28:04.127 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:04.127 Transport Address: 10.0.0.2 [2024-11-27 07:23:15.215465] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:28:04.127 [2024-11-27 07:23:15.215477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308100) on tqpair=0x12a6690
00:28:04.127 [2024-11-27 07:23:15.215485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.127 [2024-11-27 07:23:15.215490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308280) on tqpair=0x12a6690
00:28:04.127 [2024-11-27 07:23:15.215495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.127 [2024-11-27 07:23:15.215500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308400) on tqpair=0x12a6690
00:28:04.127 [2024-11-27 07:23:15.215505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.127 [2024-11-27 07:23:15.215510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690
00:28:04.127 [2024-11-27 07:23:15.215515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.127 [2024-11-27 07:23:15.215525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.127 [2024-11-27 07:23:15.215529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.127 [2024-11-27 07:23:15.215533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690)
00:28:04.127 [2024-11-27 07:23:15.215542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.127 [2024-11-27 07:23:15.215560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0
00:28:04.127 [2024-11-27 07:23:15.215834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.127 [2024-11-27 07:23:15.215840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.127 [2024-11-27 07:23:15.215844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.127 [2024-11-27 07:23:15.215848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690
00:28:04.127 [2024-11-27 07:23:15.215856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.127 [2024-11-27 07:23:15.215859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.127 [2024-11-27 07:23:15.215863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690)
00:28:04.127 [2024-11-27
07:23:15.215870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.127 [2024-11-27 07:23:15.215883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.127 [2024-11-27 07:23:15.216103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.127 [2024-11-27 07:23:15.216109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.127 [2024-11-27 07:23:15.216113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.127 [2024-11-27 07:23:15.216122] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:28:04.127 [2024-11-27 07:23:15.216128] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:28:04.127 [2024-11-27 07:23:15.216138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.127 [2024-11-27 07:23:15.216152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.127 [2024-11-27 07:23:15.216177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.127 [2024-11-27 07:23:15.216381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.127 [2024-11-27 07:23:15.216389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.127 [2024-11-27 07:23:15.216393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.127 [2024-11-27 07:23:15.216407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.127 [2024-11-27 07:23:15.216422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.127 [2024-11-27 07:23:15.216433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.127 [2024-11-27 07:23:15.216613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.127 [2024-11-27 07:23:15.216620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.127 [2024-11-27 07:23:15.216623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.127 [2024-11-27 07:23:15.216637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.127 [2024-11-27 07:23:15.216645] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.127 [2024-11-27 07:23:15.216652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.128 [2024-11-27 07:23:15.216662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.128 [2024-11-27 07:23:15.216833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.128 [2024-11-27 07:23:15.216839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.128 [2024-11-27 07:23:15.216843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.216846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.128 [2024-11-27 07:23:15.216856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.216860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.216864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.128 [2024-11-27 07:23:15.216871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.128 [2024-11-27 07:23:15.216881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.128 [2024-11-27 07:23:15.217048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.128 [2024-11-27 07:23:15.217054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.128 [2024-11-27 07:23:15.217058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.128 [2024-11-27 07:23:15.217072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.128 [2024-11-27 07:23:15.217087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.128 [2024-11-27 07:23:15.217098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.128 [2024-11-27 07:23:15.217342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.128 [2024-11-27 07:23:15.217348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.128 [2024-11-27 07:23:15.217352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.128 [2024-11-27 07:23:15.217366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.128 [2024-11-27 07:23:15.217381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.128 [2024-11-27 07:23:15.217392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.128 [2024-11-27 07:23:15.217566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.128 [2024-11-27 07:23:15.217572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.128 [2024-11-27 07:23:15.217576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.128 [2024-11-27 07:23:15.217590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.128 [2024-11-27 07:23:15.217604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.128 [2024-11-27 07:23:15.217614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.128 [2024-11-27 07:23:15.217785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.128 [2024-11-27 07:23:15.217791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.128 [2024-11-27 07:23:15.217795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.128 [2024-11-27 07:23:15.217808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.217816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.128 [2024-11-27 07:23:15.217823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.128 [2024-11-27 07:23:15.217833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.128 [2024-11-27 07:23:15.218004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.128 [2024-11-27 07:23:15.218011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.128 [2024-11-27 07:23:15.218014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.218018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.128 [2024-11-27 07:23:15.218028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.218032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.218035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a6690) 00:28:04.128 [2024-11-27 07:23:15.218042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.128 [2024-11-27 07:23:15.218052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1308580, cid 3, qid 0 00:28:04.128 
[2024-11-27 07:23:15.222172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.128 [2024-11-27 07:23:15.222187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.128 [2024-11-27 07:23:15.222190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.128 [2024-11-27 07:23:15.222194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1308580) on tqpair=0x12a6690 00:28:04.128 [2024-11-27 07:23:15.222203] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:28:04.128 00:28:04.128 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:04.128 [2024-11-27 07:23:15.267752] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:28:04.128 [2024-11-27 07:23:15.267804] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490394 ] 00:28:04.128 [2024-11-27 07:23:15.321313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:28:04.128 [2024-11-27 07:23:15.321376] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:04.128 [2024-11-27 07:23:15.321382] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:04.128 [2024-11-27 07:23:15.321401] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:04.128 [2024-11-27 07:23:15.321411] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:04.395 [2024-11-27 07:23:15.325466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:28:04.396 [2024-11-27 07:23:15.325512] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1027690 0 00:28:04.396 [2024-11-27 07:23:15.333173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:04.396 [2024-11-27 07:23:15.333189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:04.396 [2024-11-27 07:23:15.333193] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:04.396 [2024-11-27 07:23:15.333197] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:04.396 [2024-11-27 07:23:15.333236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.333244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.333248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.333263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:04.396 [2024-11-27 07:23:15.333286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.340176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.396 [2024-11-27 07:23:15.340187] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.396 [2024-11-27 07:23:15.340191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.396 [2024-11-27 07:23:15.340206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:04.396 [2024-11-27 07:23:15.340213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:28:04.396 [2024-11-27 07:23:15.340219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:28:04.396 [2024-11-27 07:23:15.340240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.340257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.396 [2024-11-27 07:23:15.340273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.340506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.396 [2024-11-27 07:23:15.340513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.396 [2024-11-27 07:23:15.340516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.396 [2024-11-27 07:23:15.340529] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:28:04.396 [2024-11-27 07:23:15.340537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:28:04.396 [2024-11-27 07:23:15.340544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.340559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.396 [2024-11-27 07:23:15.340570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.340748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.396 [2024-11-27 07:23:15.340754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.396 [2024-11-27 07:23:15.340758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.396 [2024-11-27 07:23:15.340767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:28:04.396 [2024-11-27 07:23:15.340776] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:04.396 [2024-11-27 07:23:15.340782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.340790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.340797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.396 [2024-11-27 07:23:15.340807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.340992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.396 [2024-11-27 07:23:15.340998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.396 [2024-11-27 07:23:15.341001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.396 [2024-11-27 07:23:15.341010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:04.396 [2024-11-27 07:23:15.341020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.341037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.396 [2024-11-27 07:23:15.341048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.341256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.396 [2024-11-27 07:23:15.341263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.396 [2024-11-27 07:23:15.341267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.396 [2024-11-27 07:23:15.341275] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:04.396 [2024-11-27 07:23:15.341280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:04.396 [2024-11-27 07:23:15.341288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:04.396 [2024-11-27 07:23:15.341397] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:28:04.396 [2024-11-27 07:23:15.341402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:04.396 [2024-11-27 07:23:15.341411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.341425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.396 [2024-11-27 07:23:15.341436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.341615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.396 [2024-11-27 07:23:15.341622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.396 [2024-11-27 07:23:15.341625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.396 [2024-11-27 07:23:15.341634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:04.396 [2024-11-27 07:23:15.341644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.341658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.396 [2024-11-27 07:23:15.341669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.341877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.396 [2024-11-27 07:23:15.341884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.396 [2024-11-27 07:23:15.341887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.396 [2024-11-27 07:23:15.341895] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:04.396 [2024-11-27 07:23:15.341900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:04.396 [2024-11-27 07:23:15.341908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:28:04.396 [2024-11-27 07:23:15.341926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:04.396 [2024-11-27 07:23:15.341935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.341939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.396 [2024-11-27 07:23:15.341946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.396 [2024-11-27 07:23:15.341957] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.396 [2024-11-27 07:23:15.342203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.396 [2024-11-27 07:23:15.342211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.396 [2024-11-27 07:23:15.342214] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.342218] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=4096, cccid=0 00:28:04.396 [2024-11-27 07:23:15.342225] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089100) on tqpair(0x1027690): expected_datao=0, payload_size=4096 00:28:04.396 [2024-11-27 07:23:15.342230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.342244] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.396 [2024-11-27 07:23:15.342249] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.397 [2024-11-27 07:23:15.385186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.397 [2024-11-27 07:23:15.385190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.397 [2024-11-27 07:23:15.385204] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:28:04.397 [2024-11-27 07:23:15.385209] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:28:04.397 [2024-11-27 07:23:15.385214] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:28:04.397 [2024-11-27 07:23:15.385219] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:28:04.397 [2024-11-27 07:23:15.385224] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:28:04.397 [2024-11-27 07:23:15.385229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.385239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.385247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.385264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.397 [2024-11-27 07:23:15.385279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.397 [2024-11-27 07:23:15.385456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.397 [2024-11-27 07:23:15.385462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.397 [2024-11-27 
07:23:15.385465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:28:04.397 [2024-11-27 07:23:15.385482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.385496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.397 [2024-11-27 07:23:15.385503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.385516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.397 [2024-11-27 07:23:15.385523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.385536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.397 [2024-11-27 07:23:15.385542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.385556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.397 [2024-11-27 07:23:15.385561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.385580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.385587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.385597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.397 [2024-11-27 07:23:15.385610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:28:04.397 [2024-11-27 07:23:15.385615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089280, cid 1, qid 0 00:28:04.397 [2024-11-27 07:23:15.385620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089400, cid 2, qid 0 00:28:04.397 
[2024-11-27 07:23:15.385625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.397 [2024-11-27 07:23:15.385630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:28:04.397 [2024-11-27 07:23:15.385884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.397 [2024-11-27 07:23:15.385891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.397 [2024-11-27 07:23:15.385894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:28:04.397 [2024-11-27 07:23:15.385903] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:28:04.397 [2024-11-27 07:23:15.385909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.385920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.385930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.385936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.385944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.385950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.397 [2024-11-27 07:23:15.385961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:28:04.397 [2024-11-27 07:23:15.386152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.397 [2024-11-27 07:23:15.386169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.397 [2024-11-27 07:23:15.386173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.386177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:28:04.397 [2024-11-27 07:23:15.386246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.386257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.386265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.386269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.386276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.397 [2024-11-27 07:23:15.386287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:28:04.397 
[2024-11-27 07:23:15.386506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.397 [2024-11-27 07:23:15.386513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.397 [2024-11-27 07:23:15.386517] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.386520] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=4096, cccid=4 00:28:04.397 [2024-11-27 07:23:15.386525] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=4096 00:28:04.397 [2024-11-27 07:23:15.386530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.386547] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.386551] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.427388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.397 [2024-11-27 07:23:15.427400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.397 [2024-11-27 07:23:15.427404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.427408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:28:04.397 [2024-11-27 07:23:15.427423] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:28:04.397 [2024-11-27 07:23:15.427433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.427443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:28:04.397 [2024-11-27 07:23:15.427451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.427455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:28:04.397 [2024-11-27 07:23:15.427464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.397 [2024-11-27 07:23:15.427478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:28:04.397 [2024-11-27 07:23:15.427629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.397 [2024-11-27 07:23:15.427636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.397 [2024-11-27 07:23:15.427639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.427643] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=4096, cccid=4 00:28:04.397 [2024-11-27 07:23:15.427647] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=4096 00:28:04.397 [2024-11-27 07:23:15.427652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.427669] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.427673] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.397 [2024-11-27 07:23:15.468340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:28:04.397 [2024-11-27 07:23:15.468350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.398 [2024-11-27 07:23:15.468353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.468357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:28:04.398 [2024-11-27 07:23:15.468370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.468381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.468389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.468393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.468400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.468412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:28:04.398 [2024-11-27 07:23:15.468566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.398 [2024-11-27 07:23:15.468573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.398 [2024-11-27 07:23:15.468576] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.468580] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=4096, cccid=4 00:28:04.398 [2024-11-27 07:23:15.468585] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=4096 00:28:04.398 [2024-11-27 07:23:15.468589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.468633] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.468637] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.398 [2024-11-27 07:23:15.509364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.398 [2024-11-27 07:23:15.509367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:28:04.398 [2024-11-27 07:23:15.509387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.509396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.509408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.509415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:04.398 [2024-11-27 
07:23:15.509421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.509426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.509432] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:28:04.398 [2024-11-27 07:23:15.509437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:28:04.398 [2024-11-27 07:23:15.509443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:28:04.398 [2024-11-27 07:23:15.509461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.509473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.509480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.509494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.398 [2024-11-27 07:23:15.509510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:28:04.398 [2024-11-27 07:23:15.509515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089880, cid 5, qid 0 00:28:04.398 [2024-11-27 07:23:15.509632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.398 [2024-11-27 07:23:15.509639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.398 [2024-11-27 07:23:15.509642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:28:04.398 [2024-11-27 07:23:15.509653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.398 [2024-11-27 07:23:15.509659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.398 [2024-11-27 07:23:15.509663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089880) on tqpair=0x1027690 00:28:04.398 [2024-11-27 07:23:15.509676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.509686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.509697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089880, cid 5, qid 0 
00:28:04.398 [2024-11-27 07:23:15.509869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.398 [2024-11-27 07:23:15.509876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.398 [2024-11-27 07:23:15.509879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089880) on tqpair=0x1027690 00:28:04.398 [2024-11-27 07:23:15.509893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.509899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.509906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.509916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089880, cid 5, qid 0 00:28:04.398 [2024-11-27 07:23:15.510134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.398 [2024-11-27 07:23:15.510140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.398 [2024-11-27 07:23:15.510144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089880) on tqpair=0x1027690 00:28:04.398 [2024-11-27 07:23:15.510157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.510176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.510186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089880, cid 5, qid 0 00:28:04.398 [2024-11-27 07:23:15.510385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.398 [2024-11-27 07:23:15.510391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.398 [2024-11-27 07:23:15.510395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089880) on tqpair=0x1027690 00:28:04.398 [2024-11-27 07:23:15.510415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.510427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.510434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.510445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.510452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:04.398 [2024-11-27 07:23:15.510456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.510463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.510471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1027690) 00:28:04.398 [2024-11-27 07:23:15.510481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.398 [2024-11-27 07:23:15.510492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089880, cid 5, qid 0 00:28:04.398 [2024-11-27 07:23:15.510500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:28:04.398 [2024-11-27 07:23:15.510504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089a00, cid 6, qid 0 00:28:04.398 [2024-11-27 07:23:15.510509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089b80, cid 7, qid 0 00:28:04.398 [2024-11-27 07:23:15.510773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.398 [2024-11-27 07:23:15.510780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.398 [2024-11-27 07:23:15.510789] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510793] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=8192, cccid=5 00:28:04.398 [2024-11-27 07:23:15.510797] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089880) on tqpair(0x1027690): expected_datao=0, payload_size=8192 00:28:04.398 [2024-11-27 07:23:15.510802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510904] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510909] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.398 [2024-11-27 07:23:15.510921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.398 [2024-11-27 07:23:15.510925] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.398 [2024-11-27 07:23:15.510928] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=512, cccid=4 00:28:04.398 [2024-11-27 07:23:15.510933] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=512 00:28:04.399 [2024-11-27 07:23:15.510937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.510944] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.510947] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.510953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.399 [2024-11-27 07:23:15.510959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.399 [2024-11-27 07:23:15.510962] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.510966] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=512, cccid=6 00:28:04.399 [2024-11-27 07:23:15.510970] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089a00) on tqpair(0x1027690): expected_datao=0, payload_size=512 00:28:04.399 [2024-11-27 07:23:15.510975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.510981] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.510984] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.510990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.399 [2024-11-27 07:23:15.510996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.399 [2024-11-27 07:23:15.510999] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511003] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=4096, cccid=7 00:28:04.399 [2024-11-27 07:23:15.511007] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089b80) on tqpair(0x1027690): expected_datao=0, payload_size=4096 00:28:04.399 [2024-11-27 07:23:15.511012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511019] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511022] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.399 [2024-11-27 07:23:15.511038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.399 [2024-11-27 07:23:15.511042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089880) on tqpair=0x1027690 00:28:04.399 [2024-11-27 07:23:15.511058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.399 [2024-11-27 07:23:15.511064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.399 [2024-11-27 07:23:15.511068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:28:04.399 [2024-11-27 07:23:15.511084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.399 [2024-11-27 07:23:15.511091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.399 [2024-11-27 07:23:15.511094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089a00) on tqpair=0x1027690 00:28:04.399 [2024-11-27 07:23:15.511105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.399 [2024-11-27 07:23:15.511111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.399 [2024-11-27 07:23:15.511115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.399 [2024-11-27 07:23:15.511119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089b80) on tqpair=0x1027690 00:28:04.399 
=====================================================
00:28:04.399 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:04.399 =====================================================
00:28:04.399 Controller Capabilities/Features
00:28:04.399 ================================
00:28:04.399 Vendor ID: 8086
00:28:04.399 Subsystem Vendor ID: 8086
00:28:04.399 Serial Number: SPDK00000000000001
00:28:04.399 Model Number: SPDK bdev Controller
00:28:04.399 Firmware Version: 25.01
00:28:04.399 Recommended Arb Burst: 6
00:28:04.399 IEEE OUI Identifier: e4 d2 5c
00:28:04.399 Multi-path I/O
00:28:04.399 May have multiple subsystem ports: Yes
00:28:04.399 May have multiple controllers: Yes
00:28:04.399 Associated with SR-IOV VF: No
00:28:04.399 Max Data Transfer Size: 131072
00:28:04.399 Max Number of Namespaces: 32
00:28:04.399 Max Number of I/O Queues: 127
00:28:04.399 NVMe Specification Version (VS): 1.3
00:28:04.399 NVMe Specification Version (Identify): 1.3
00:28:04.399 Maximum Queue Entries: 128
00:28:04.399 Contiguous Queues Required: Yes
00:28:04.399 Arbitration Mechanisms Supported
00:28:04.399 Weighted Round Robin: Not Supported
00:28:04.399 Vendor Specific: Not Supported
00:28:04.399 Reset Timeout: 15000 ms
00:28:04.399 Doorbell Stride: 4 bytes
00:28:04.399 NVM Subsystem Reset: Not Supported
00:28:04.399 Command Sets Supported
00:28:04.399 NVM Command Set: Supported
00:28:04.399 Boot Partition: Not Supported
00:28:04.399 Memory Page Size Minimum: 4096 bytes
00:28:04.399 Memory Page Size Maximum: 4096 bytes
00:28:04.399 Persistent Memory Region: Not Supported
00:28:04.399 Optional Asynchronous Events Supported
00:28:04.399 Namespace Attribute Notices: Supported
00:28:04.399 Firmware Activation Notices: Not Supported
00:28:04.399 ANA Change Notices: Not Supported
00:28:04.399 PLE Aggregate Log Change Notices: Not Supported
00:28:04.399 LBA Status Info Alert Notices: Not Supported
00:28:04.399 EGE Aggregate Log Change Notices: Not Supported
00:28:04.399 Normal NVM Subsystem Shutdown event: Not Supported
00:28:04.399 Zone Descriptor Change Notices: Not Supported
00:28:04.399 Discovery Log Change Notices: Not Supported
00:28:04.399 Controller Attributes
00:28:04.399 128-bit Host Identifier: Supported
00:28:04.399 Non-Operational Permissive Mode: Not Supported
00:28:04.399 NVM Sets: Not Supported
00:28:04.399 Read Recovery Levels: Not Supported
00:28:04.399 Endurance Groups: Not Supported
00:28:04.399 Predictable Latency Mode: Not Supported
00:28:04.399 Traffic Based Keep ALive: Not Supported
00:28:04.399 Namespace Granularity: Not Supported
00:28:04.399 SQ Associations: Not Supported
00:28:04.399 UUID List: Not Supported
00:28:04.399 Multi-Domain Subsystem: Not Supported
00:28:04.399 Fixed Capacity Management: Not Supported
00:28:04.399 Variable Capacity Management: Not Supported
00:28:04.399 Delete Endurance Group: Not Supported
00:28:04.399 Delete NVM Set: Not Supported
00:28:04.399 Extended LBA Formats Supported: Not Supported
00:28:04.399 Flexible Data Placement Supported: Not Supported
00:28:04.399
00:28:04.399 Controller Memory Buffer Support
00:28:04.399 ================================
00:28:04.399 Supported: No
00:28:04.399
00:28:04.399 Persistent Memory Region Support
00:28:04.399 ================================
00:28:04.399 Supported: No
00:28:04.399
00:28:04.399 Admin Command Set Attributes
00:28:04.399 ============================
00:28:04.399 Security Send/Receive: Not Supported
00:28:04.399 Format NVM: Not Supported
00:28:04.399 Firmware Activate/Download: Not Supported
00:28:04.399 Namespace Management: Not Supported
00:28:04.399 Device Self-Test: Not Supported
00:28:04.399 Directives: Not Supported
00:28:04.399 NVMe-MI: Not Supported
00:28:04.399 Virtualization Management: Not Supported
00:28:04.399 Doorbell Buffer Config: Not Supported
00:28:04.399 Get LBA Status Capability: Not Supported
00:28:04.399 Command & Feature Lockdown Capability: Not Supported
00:28:04.399 Abort Command Limit: 4
00:28:04.399 Async Event Request Limit: 4
00:28:04.399 Number of Firmware Slots: N/A
00:28:04.399 Firmware Slot 1 Read-Only: N/A
00:28:04.399 Firmware Activation Without Reset: N/A
00:28:04.399 Multiple Update Detection Support: N/A
00:28:04.399 Firmware Update Granularity: No Information Provided
00:28:04.399 Per-Namespace SMART Log: No
00:28:04.399 Asymmetric Namespace Access Log Page: Not Supported
00:28:04.399 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:04.399 Command Effects Log Page: Supported
00:28:04.399 Get Log Page Extended Data: Supported
00:28:04.399 Telemetry Log Pages: Not Supported
00:28:04.399 Persistent Event Log Pages: Not Supported
00:28:04.399 Supported Log Pages Log Page: May Support
00:28:04.399 Commands Supported & Effects Log Page: Not Supported
00:28:04.399 Feature Identifiers & Effects Log Page:May Support
00:28:04.399 NVMe-MI Commands & Effects Log Page: May Support
00:28:04.399 Data Area 4 for Telemetry Log: Not Supported
00:28:04.399 Error Log Page Entries Supported: 128
00:28:04.399 Keep Alive: Supported
00:28:04.399 Keep Alive Granularity: 10000 ms
00:28:04.399
00:28:04.399 NVM Command Set Attributes
00:28:04.399 ==========================
00:28:04.399 Submission Queue Entry Size
00:28:04.399 Max: 64
00:28:04.399 Min: 64
00:28:04.399 Completion Queue Entry Size
00:28:04.399 Max: 16
00:28:04.399 Min: 16
00:28:04.399 Number of Namespaces: 32
00:28:04.399 Compare Command: Supported
00:28:04.399 Write Uncorrectable Command: Not Supported
00:28:04.399 Dataset Management Command: Supported
00:28:04.399 Write Zeroes Command: Supported
00:28:04.399 Set Features Save Field: Not Supported
00:28:04.399 Reservations: Supported
00:28:04.399 Timestamp: Not Supported
00:28:04.399 Copy: Supported
00:28:04.399 Volatile Write Cache: Present
00:28:04.399 Atomic Write Unit (Normal): 1
00:28:04.399 Atomic Write Unit (PFail): 1
00:28:04.399 Atomic Compare & Write Unit: 1
00:28:04.399 Fused Compare & Write: Supported
00:28:04.399 Scatter-Gather List
00:28:04.399 SGL Command Set: Supported
00:28:04.399 SGL Keyed: Supported
00:28:04.399 SGL Bit Bucket Descriptor: Not Supported
00:28:04.400 SGL Metadata Pointer: Not Supported
00:28:04.400 Oversized SGL: Not Supported
00:28:04.400 SGL Metadata Address: Not Supported
00:28:04.400 SGL Offset: Supported
00:28:04.400 Transport SGL Data Block: Not Supported
00:28:04.400 Replay Protected Memory Block: Not Supported
00:28:04.400
00:28:04.400 Firmware Slot Information
00:28:04.400 =========================
00:28:04.400 Active slot: 1
00:28:04.400 Slot 1 Firmware Revision: 25.01
00:28:04.400
00:28:04.400
00:28:04.400 Commands Supported and Effects
00:28:04.400 ==============================
00:28:04.400 Admin Commands
00:28:04.400 --------------
00:28:04.400 Get Log Page (02h): Supported
00:28:04.400 Identify (06h): Supported
00:28:04.400 Abort (08h): Supported
00:28:04.400 Set Features (09h): Supported
00:28:04.400 Get Features (0Ah): Supported
00:28:04.400 Asynchronous Event Request (0Ch): Supported
00:28:04.400 Keep Alive (18h): Supported
00:28:04.400 I/O Commands
00:28:04.400 ------------
00:28:04.400 Flush (00h): Supported LBA-Change
00:28:04.400 Write (01h): Supported LBA-Change
00:28:04.400 Read (02h): Supported
00:28:04.400 Compare (05h): Supported
00:28:04.400 Write Zeroes (08h): Supported LBA-Change
00:28:04.400 Dataset Management (09h): Supported LBA-Change
00:28:04.400 Copy (19h): Supported LBA-Change
00:28:04.400
00:28:04.400 Error Log
00:28:04.400 =========
00:28:04.400
00:28:04.400 Arbitration
00:28:04.400 ===========
00:28:04.400 Arbitration Burst: 1
00:28:04.400
00:28:04.400 Power Management
00:28:04.400 ================
00:28:04.400 Number of Power States: 1
00:28:04.400 Current Power State: Power State #0
00:28:04.400 Power State #0:
00:28:04.400 Max Power: 0.00 W
00:28:04.400 Non-Operational State: Operational
00:28:04.400 Entry Latency: Not Reported
00:28:04.400 Exit Latency: Not Reported
00:28:04.400 Relative Read Throughput: 0
00:28:04.400 Relative Read Latency: 0
00:28:04.400 Relative Write Throughput: 0
00:28:04.400 Relative Write Latency: 0
00:28:04.400 Idle Power: Not Reported
00:28:04.400 Active Power: Not Reported
00:28:04.400 Non-Operational Permissive Mode: Not Supported
00:28:04.400
00:28:04.400 Health Information
00:28:04.400 ==================
00:28:04.400 Critical Warnings:
00:28:04.400 Available Spare Space: OK
00:28:04.400 Temperature: OK
00:28:04.400 Device Reliability: OK
00:28:04.400 Read Only: No
00:28:04.400 Volatile Memory Backup: OK
00:28:04.400 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:04.400 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:28:04.400 Available Spare: 0%
00:28:04.400 Available Spare Threshold: 0%
00:28:04.400 Life Percentage Used:[2024-11-27 07:23:15.515233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.400 [2024-11-27 07:23:15.515240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1027690)
00:28:04.400 [2024-11-27 07:23:15.515247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.400 [2024-11-27 07:23:15.515260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089b80, cid 7, qid 0
00:28:04.400 [2024-11-27 07:23:15.515480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.400 [2024-11-27 07:23:15.515487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.400 [2024-11-27 07:23:15.515490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.400 [2024-11-27 07:23:15.515494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089b80) on tqpair=0x1027690
00:28:04.400 [2024-11-27 07:23:15.515533] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:28:04.400 [2024-11-27 07:23:15.515543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690
00:28:04.400 [2024-11-27 07:23:15.515549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.400 [2024-11-27 07:23:15.515555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089280) on tqpair=0x1027690
00:28:04.400 [2024-11-27 07:23:15.515560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.400 [2024-11-27 07:23:15.515565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089400) on tqpair=0x1027690
00:28:04.400 [2024-11-27 07:23:15.515569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.400 [2024-11-27 07:23:15.515574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.400 [2024-11-27 07:23:15.515579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.400 [2024-11-27 07:23:15.515588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.515592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.515595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.400 [2024-11-27 07:23:15.515602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.400 [2024-11-27 07:23:15.515614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.400 [2024-11-27 07:23:15.515831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.400 [2024-11-27 07:23:15.515837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.400 [2024-11-27 07:23:15.515841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.515845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.400 [2024-11-27 07:23:15.515852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.515861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.515865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.400 [2024-11-27 07:23:15.515872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.400 [2024-11-27 07:23:15.515885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.400 [2024-11-27 07:23:15.516062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.400 [2024-11-27 07:23:15.516068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.400 [2024-11-27 07:23:15.516071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.516075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.400 [2024-11-27 07:23:15.516080] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:28:04.400 [2024-11-27 07:23:15.516085] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:28:04.400 [2024-11-27 07:23:15.516095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.516099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.516103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.400 [2024-11-27 07:23:15.516110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.400 
[2024-11-27 07:23:15.516121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.400 [2024-11-27 07:23:15.516334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.400 [2024-11-27 07:23:15.516341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.400 [2024-11-27 07:23:15.516345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.516348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.400 [2024-11-27 07:23:15.516359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.516363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.400 [2024-11-27 07:23:15.516366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.400 [2024-11-27 07:23:15.516373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.400 [2024-11-27 07:23:15.516383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.400 [2024-11-27 07:23:15.516585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.400 [2024-11-27 07:23:15.516591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.516595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.516598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.516609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.516613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.516616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.516623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.516633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.516838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.516845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.516848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.516855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.516865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.516869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.516872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.516879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.516889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.517054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.517060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.517064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.517077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.517092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.517102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.517293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.517299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.517303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.517317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.517331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.517344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.517592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.517599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.517602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.517616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.517630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.517640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.517845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.517851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.517855] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.517871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.517879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.517885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.517896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.518076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.518083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.518086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.518100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.518114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.518124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.518350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.518357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.518360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.518374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.518389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.518399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.518602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.518608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.518612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 
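
Each repeated record group here (pdu type = 5, complete tcp_req(0x1089580), FABRIC PROPERTY GET qid:0 cid:3) is one iteration of the shutdown poll: after the FABRIC PROPERTY SET above wrote CC to begin controller shutdown, the host reads CSTS once per iteration until the shutdown-complete status appears, logged a few lines below as "shutdown complete in 7 milliseconds". A sketch for reading captured logs like this one (identify_debug.log is a hypothetical file name):

# Count CSTS poll iterations: the driver issues one FABRIC PROPERTY GET on
# the admin queue per poll until SHST reports shutdown complete.
grep -c 'FABRIC PROPERTY GET qid:0' identify_debug.log
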
00:28:04.401 [2024-11-27 07:23:15.518626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.518640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.518651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.518854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.518860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.518863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.518878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.518888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.518894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.518905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.519084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.519091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.519094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.519098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.519108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.519112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.519115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:28:04.401 [2024-11-27 07:23:15.519122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-27 07:23:15.519133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:28:04.401 [2024-11-27 07:23:15.523174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.401 [2024-11-27 07:23:15.523183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.401 [2024-11-27 07:23:15.523187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.401 [2024-11-27 07:23:15.523191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:28:04.401 [2024-11-27 07:23:15.523199] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:28:04.401 0% 00:28:04.401 Data Units 
Read: 0 00:28:04.401 Data Units Written: 0 00:28:04.401 Host Read Commands: 0 00:28:04.401 Host Write Commands: 0 00:28:04.401 Controller Busy Time: 0 minutes 00:28:04.401 Power Cycles: 0 00:28:04.401 Power On Hours: 0 hours 00:28:04.401 Unsafe Shutdowns: 0 00:28:04.401 Unrecoverable Media Errors: 0 00:28:04.401 Lifetime Error Log Entries: 0 00:28:04.402 Warning Temperature Time: 0 minutes 00:28:04.402 Critical Temperature Time: 0 minutes 00:28:04.402 00:28:04.402 Number of Queues 00:28:04.402 ================ 00:28:04.402 Number of I/O Submission Queues: 127 00:28:04.402 Number of I/O Completion Queues: 127 00:28:04.402 00:28:04.402 Active Namespaces 00:28:04.402 ================= 00:28:04.402 Namespace ID:1 00:28:04.402 Error Recovery Timeout: Unlimited 00:28:04.402 Command Set Identifier: NVM (00h) 00:28:04.402 Deallocate: Supported 00:28:04.402 Deallocated/Unwritten Error: Not Supported 00:28:04.402 Deallocated Read Value: Unknown 00:28:04.402 Deallocate in Write Zeroes: Not Supported 00:28:04.402 Deallocated Guard Field: 0xFFFF 00:28:04.402 Flush: Supported 00:28:04.402 Reservation: Supported 00:28:04.402 Namespace Sharing Capabilities: Multiple Controllers 00:28:04.402 Size (in LBAs): 131072 (0GiB) 00:28:04.402 Capacity (in LBAs): 131072 (0GiB) 00:28:04.402 Utilization (in LBAs): 131072 (0GiB) 00:28:04.402 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:04.402 EUI64: ABCDEF0123456789 00:28:04.402 UUID: 823809eb-1a6c-4281-ba91-e1411b64c100 00:28:04.402 Thin Provisioning: Not Supported 00:28:04.402 Per-NS Atomic Units: Yes 00:28:04.402 Atomic Boundary Size (Normal): 0 00:28:04.402 Atomic Boundary Size (PFail): 0 00:28:04.402 Atomic Boundary Offset: 0 00:28:04.402 Maximum Single Source Range Length: 65535 00:28:04.402 Maximum Copy Length: 65535 00:28:04.402 Maximum Source Range Count: 1 00:28:04.402 NGUID/EUI64 Never Reused: No 00:28:04.402 Namespace Write Protected: No 00:28:04.402 Number of LBA Formats: 1 00:28:04.402 Current LBA Format: LBA Format #00 00:28:04.402 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:04.402 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.402 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.402 rmmod nvme_tcp 00:28:04.402 rmmod nvme_fabrics 00:28:04.664 rmmod nvme_keyring 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2490258 ']' 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2490258 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2490258 ']' 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2490258 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2490258 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2490258' 00:28:04.664 killing process with pid 2490258 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2490258 00:28:04.664 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2490258 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.926 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:06.844 00:28:06.844 real 0m11.780s 00:28:06.844 user 0m9.108s 00:28:06.844 sys 0m6.111s 00:28:06.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.844 ************************************ 00:28:06.844 END TEST nvmf_identify 00:28:06.844 ************************************ 00:28:06.844 07:23:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:06.844 07:23:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:06.844 07:23:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.844 07:23:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.106 ************************************ 00:28:07.106 START TEST nvmf_perf 00:28:07.106 ************************************ 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:07.106 * Looking for test storage... 00:28:07.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.106 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.107 --rc genhtml_branch_coverage=1 00:28:07.107 --rc genhtml_function_coverage=1 00:28:07.107 --rc genhtml_legend=1 00:28:07.107 --rc geninfo_all_blocks=1 00:28:07.107 --rc geninfo_unexecuted_blocks=1 00:28:07.107 00:28:07.107 ' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.107 --rc genhtml_branch_coverage=1 00:28:07.107 --rc genhtml_function_coverage=1 00:28:07.107 --rc genhtml_legend=1 00:28:07.107 --rc geninfo_all_blocks=1 00:28:07.107 --rc geninfo_unexecuted_blocks=1 00:28:07.107 00:28:07.107 ' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.107 --rc genhtml_branch_coverage=1 00:28:07.107 --rc genhtml_function_coverage=1 00:28:07.107 --rc genhtml_legend=1 00:28:07.107 --rc geninfo_all_blocks=1 00:28:07.107 --rc geninfo_unexecuted_blocks=1 00:28:07.107 00:28:07.107 ' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:07.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.107 --rc genhtml_branch_coverage=1 00:28:07.107 --rc genhtml_function_coverage=1 00:28:07.107 --rc genhtml_legend=1 00:28:07.107 --rc geninfo_all_blocks=1 00:28:07.107 --rc geninfo_unexecuted_blocks=1 00:28:07.107 00:28:07.107 ' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.107 07:23:18 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.107 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.369 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.369 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:07.369 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.369 07:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:15.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:15.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:15.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.521 07:23:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.521 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:15.522 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.522 07:23:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:28:15.522 00:28:15.522 --- 10.0.0.2 ping statistics --- 00:28:15.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.522 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:28:15.522 00:28:15.522 --- 10.0.0.1 ping statistics --- 00:28:15.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.522 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2494623 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2494623 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2494623 ']' 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:15.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.522 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:15.522 [2024-11-27 07:23:25.928668] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:28:15.522 [2024-11-27 07:23:25.928735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.522 [2024-11-27 07:23:26.030625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.522 [2024-11-27 07:23:26.084364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.522 [2024-11-27 07:23:26.084415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.522 [2024-11-27 07:23:26.084424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.522 [2024-11-27 07:23:26.084431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.522 [2024-11-27 07:23:26.084437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.522 [2024-11-27 07:23:26.086331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.522 [2024-11-27 07:23:26.086494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.522 [2024-11-27 07:23:26.086658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.522 [2024-11-27 07:23:26.086659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:15.784 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:16.356 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:16.356 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:16.356 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:16.356 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:16.617 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
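
At this point the perf harness has started nvmf_tgt inside the cvl_0_0_ns_spdk namespace, located the local NVMe controller at 0000:65:00.0 through gen_nvme.sh and load_subsystem_config, and created a 64 MiB Malloc bdev with 512-byte blocks. A sketch for confirming the bdev list over the /var/tmp/spdk.sock socket the harness waited on above (bdev_get_bdevs is a standard SPDK RPC; jq is assumed to be installed):

# List bdev names; at this stage Malloc0 plus the attached NVMe namespace
# bdev (Nvme0n1) should appear.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'
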
00:28:16.617 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:16.617 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:16.617 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:16.617 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:16.878 [2024-11-27 07:23:27.906996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.878 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.138 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:17.138 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:17.138 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:17.138 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:17.398 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.659 [2024-11-27 07:23:28.645707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.659 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:17.659 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:17.659 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:17.659 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:17.659 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:19.041 Initializing NVMe Controllers 00:28:19.041 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:19.041 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:19.041 Initialization complete. Launching workers. 
00:28:19.041 ======================================================== 00:28:19.041 Latency(us) 00:28:19.041 Device Information : IOPS MiB/s Average min max 00:28:19.041 PCIE (0000:65:00.0) NSID 1 from core 0: 77837.56 304.05 410.46 13.27 5095.06 00:28:19.041 ======================================================== 00:28:19.041 Total : 77837.56 304.05 410.46 13.27 5095.06 00:28:19.041 00:28:19.041 07:23:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.424 Initializing NVMe Controllers 00:28:20.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:20.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:20.424 Initialization complete. Launching workers. 00:28:20.424 ======================================================== 00:28:20.424 Latency(us) 00:28:20.424 Device Information : IOPS MiB/s Average min max 00:28:20.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.00 0.39 10412.17 226.79 45847.42 00:28:20.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21839.39 7962.52 47892.76 00:28:20.424 ======================================================== 00:28:20.424 Total : 145.00 0.57 14037.36 226.79 47892.76 00:28:20.424 00:28:20.424 07:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.806 Initializing NVMe Controllers 00:28:21.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:21.807 Initialization complete. Launching workers. 00:28:21.807 ======================================================== 00:28:21.807 Latency(us) 00:28:21.807 Device Information : IOPS MiB/s Average min max 00:28:21.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11834.02 46.23 2703.83 414.59 9089.20 00:28:21.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3815.14 14.90 8387.62 7284.05 18520.76 00:28:21.807 ======================================================== 00:28:21.807 Total : 15649.16 61.13 4089.50 414.59 18520.76 00:28:21.807 00:28:21.807 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:21.807 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:21.807 07:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.349 Initializing NVMe Controllers 00:28:24.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.349 Controller IO queue size 128, less than required. 00:28:24.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
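A useful sanity check on these tables: the MiB/s column is just IOPS times IO size. For the 128-deep 256 KiB run whose remaining rows follow, 1853.50 IOPS at -o 262144 bytes (0.25 MiB) works out to the 463.37 MiB/s reported (the table computes from unrounded IOPS):

    awk 'BEGIN { print 1853.50 * 262144 / 2^20 }'   # 463.375 MiB/s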
00:28:24.349 Controller IO queue size 128, less than required. 00:28:24.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:24.350 Initialization complete. Launching workers. 00:28:24.350 ======================================================== 00:28:24.350 Latency(us) 00:28:24.350 Device Information : IOPS MiB/s Average min max 00:28:24.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1853.50 463.37 70361.69 42634.76 124628.34 00:28:24.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.00 151.25 218112.82 43077.59 375564.68 00:28:24.350 ======================================================== 00:28:24.350 Total : 2458.50 614.62 106721.03 42634.76 375564.68 00:28:24.350 00:28:24.350 07:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:24.350 No valid NVMe controllers or AIO or URING devices found 00:28:24.350 Initializing NVMe Controllers 00:28:24.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.350 Controller IO queue size 128, less than required. 00:28:24.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.350 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:24.350 Controller IO queue size 128, less than required. 00:28:24.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.350 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:24.350 WARNING: Some requested NVMe devices were skipped 00:28:24.350 07:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:26.890 Initializing NVMe Controllers 00:28:26.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.890 Controller IO queue size 128, less than required. 00:28:26.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:26.890 Controller IO queue size 128, less than required. 00:28:26.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:26.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:26.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:26.890 Initialization complete. Launching workers. 
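The skipped run above is expected, not a failure: -o 36964 is deliberately unaligned, and each namespace is dropped because 36964 bytes is not a whole number of 512-byte sectors (36964 = 72 x 512 + 100), leaving perf with no devices to test. The same check in shell:

    io_size=36964 sector=512
    echo $(( io_size % sector ))   # 100, non-zero -> namespace removed from the test

The --transport-stat run launched above then dumps per-queue-pair poll counters; reading idle_polls against polls shows how often the reactor polled without finding work (roughly 24926/41073, about 61% idle, for the NSID 1 statistics that follow).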
00:28:26.890 00:28:26.890 ==================== 00:28:26.890 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:26.890 TCP transport: 00:28:26.890 polls: 41073 00:28:26.890 idle_polls: 24926 00:28:26.890 sock_completions: 16147 00:28:26.890 nvme_completions: 7135 00:28:26.890 submitted_requests: 10634 00:28:26.890 queued_requests: 1 00:28:26.890 00:28:26.890 ==================== 00:28:26.890 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:26.890 TCP transport: 00:28:26.890 polls: 45994 00:28:26.890 idle_polls: 30299 00:28:26.890 sock_completions: 15695 00:28:26.890 nvme_completions: 7029 00:28:26.890 submitted_requests: 10564 00:28:26.890 queued_requests: 1 00:28:26.890 ======================================================== 00:28:26.890 Latency(us) 00:28:26.890 Device Information : IOPS MiB/s Average min max 00:28:26.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1783.49 445.87 72611.95 33020.04 120821.73 00:28:26.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1756.99 439.25 73342.13 29312.04 148237.39 00:28:26.890 ======================================================== 00:28:26.890 Total : 3540.48 885.12 72974.31 29312.04 148237.39 00:28:26.890 00:28:26.890 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:26.890 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:27.150 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:28:27.150 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:27.150 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:27.150 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:27.150 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:27.150 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.150 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.151 rmmod nvme_tcp 00:28:27.151 rmmod nvme_fabrics 00:28:27.151 rmmod nvme_keyring 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2494623 ']' 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2494623 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2494623 ']' 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2494623 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494623 00:28:27.151 07:23:38 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494623' 00:28:27.151 killing process with pid 2494623 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2494623 00:28:27.151 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2494623 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.069 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.617 00:28:31.617 real 0m24.265s 00:28:31.617 user 0m58.142s 00:28:31.617 sys 0m8.650s 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:31.617 ************************************ 00:28:31.617 END TEST nvmf_perf 00:28:31.617 ************************************ 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.617 ************************************ 00:28:31.617 START TEST nvmf_fio_host 00:28:31.617 ************************************ 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:31.617 * Looking for test storage... 
00:28:31.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:31.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.617 --rc genhtml_branch_coverage=1 00:28:31.617 --rc genhtml_function_coverage=1 00:28:31.617 --rc genhtml_legend=1 00:28:31.617 --rc geninfo_all_blocks=1 00:28:31.617 --rc geninfo_unexecuted_blocks=1 00:28:31.617 00:28:31.617 ' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:31.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.617 --rc genhtml_branch_coverage=1 00:28:31.617 --rc genhtml_function_coverage=1 00:28:31.617 --rc genhtml_legend=1 00:28:31.617 --rc geninfo_all_blocks=1 00:28:31.617 --rc geninfo_unexecuted_blocks=1 00:28:31.617 00:28:31.617 ' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:31.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.617 --rc genhtml_branch_coverage=1 00:28:31.617 --rc genhtml_function_coverage=1 00:28:31.617 --rc genhtml_legend=1 00:28:31.617 --rc geninfo_all_blocks=1 00:28:31.617 --rc geninfo_unexecuted_blocks=1 00:28:31.617 00:28:31.617 ' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:31.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.617 --rc genhtml_branch_coverage=1 00:28:31.617 --rc genhtml_function_coverage=1 00:28:31.617 --rc genhtml_legend=1 00:28:31.617 --rc geninfo_all_blocks=1 00:28:31.617 --rc geninfo_unexecuted_blocks=1 00:28:31.617 00:28:31.617 ' 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.617 07:23:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.617 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:31.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:31.618 
07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.618 07:23:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:39.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:39.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:39.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:39.770 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.770 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.771 07:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:28:39.771 00:28:39.771 --- 10.0.0.2 ping statistics --- 00:28:39.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.771 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
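Because NET_TYPE=phy, the test wires up real NIC ports rather than veth pairs: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator keeps cvl_0_1 (10.0.0.1), so initiator and target traffic take separate interfaces instead of a loopback path. The commands logged above, condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420

The two pings here (0.634 ms to the target address, 0.322 ms from inside the namespace back to the initiator) confirm connectivity before the target is launched in the namespace.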
00:28:39.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:28:39.771 00:28:39.771 --- 10.0.0.1 ping statistics --- 00:28:39.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.771 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2501688 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2501688 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2501688 ']' 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.771 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.771 [2024-11-27 07:23:50.226267] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:28:39.771 [2024-11-27 07:23:50.226346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.771 [2024-11-27 07:23:50.327343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.771 [2024-11-27 07:23:50.380198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.771 [2024-11-27 07:23:50.380250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.771 [2024-11-27 07:23:50.380259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.771 [2024-11-27 07:23:50.380267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.771 [2024-11-27 07:23:50.380273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.771 [2024-11-27 07:23:50.382856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.771 [2024-11-27 07:23:50.382998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.771 [2024-11-27 07:23:50.383168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.771 [2024-11-27 07:23:50.383182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.033 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.033 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:28:40.033 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:40.033 [2024-11-27 07:23:51.212773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.294 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:40.294 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.294 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.294 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:40.294 Malloc1 00:28:40.556 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.556 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:40.817 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.079 [2024-11-27 07:23:52.089751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.079 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:41.340 07:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:41.602 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:41.602 fio-3.35 00:28:41.602 Starting 1 thread 00:28:44.148 [2024-11-27 07:23:55.088879] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc7b0 is same with the state(6) to be set 00:28:44.148 [2024-11-27 07:23:55.088940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc7b0 is same with the state(6) to be set 00:28:44.148 00:28:44.148 test: (groupid=0, jobs=1): err= 0: pid=2502373: Wed Nov 27 07:23:55 2024 00:28:44.148 read: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(108MiB/2005msec) 00:28:44.148 slat (usec): min=2, max=294, avg= 2.16, stdev= 2.50 00:28:44.148 clat (usec): min=3613, max=9692, avg=5079.82, stdev=405.01 00:28:44.148 lat (usec): min=3615, max=9698, avg=5081.99, stdev=405.31 00:28:44.148 clat percentiles (usec): 00:28:44.148 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 4817], 00:28:44.148 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:44.148 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:28:44.148 | 99.00th=[ 5932], 99.50th=[ 6980], 99.90th=[ 9110], 99.95th=[ 9372], 00:28:44.148 | 99.99th=[ 9634] 00:28:44.148 bw ( KiB/s): min=53792, max=56016, per=100.00%, avg=55416.00, stdev=1083.46, samples=4 00:28:44.148 iops : min=13448, max=14004, avg=13854.00, stdev=270.87, samples=4 00:28:44.148 write: IOPS=13.9k, BW=54.1MiB/s (56.8MB/s)(109MiB/2005msec); 0 zone resets 00:28:44.148 slat (usec): min=2, max=268, avg= 2.23, stdev= 1.79 00:28:44.148 clat (usec): min=2829, max=8521, avg=4104.49, stdev=371.55 00:28:44.148 lat (usec): min=2832, max=8523, avg=4106.72, stdev=371.89 00:28:44.148 clat percentiles (usec): 00:28:44.148 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:28:44.148 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:28:44.148 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:28:44.148 | 99.00th=[ 5014], 99.50th=[ 6325], 99.90th=[ 8094], 99.95th=[ 8160], 00:28:44.148 | 99.99th=[ 8291] 00:28:44.148 bw ( KiB/s): min=54112, max=55944, per=100.00%, avg=55428.00, stdev=883.45, samples=4 00:28:44.148 iops : min=13528, max=13986, avg=13857.00, stdev=220.86, samples=4 00:28:44.148 lat (msec) : 4=18.35%, 10=81.65% 00:28:44.148 cpu : usr=78.94%, sys=20.46%, ctx=31, majf=0, minf=16 00:28:44.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:44.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:44.148 issued rwts: total=27766,27783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:44.148 00:28:44.148 Run status group 0 (all jobs): 00:28:44.148 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2005-2005msec 00:28:44.148 WRITE: bw=54.1MiB/s (56.8MB/s), 54.1MiB/s-54.1MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:44.148 07:23:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:44.148 07:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:44.411 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:44.411 fio-3.35 00:28:44.411 Starting 1 thread 00:28:46.955 00:28:46.955 test: (groupid=0, jobs=1): err= 0: pid=2503046: Wed Nov 27 07:23:57 2024 00:28:46.955 read: IOPS=9394, BW=147MiB/s (154MB/s)(294MiB/2005msec) 00:28:46.955 slat (usec): min=3, max=110, avg= 3.63, stdev= 1.61 00:28:46.955 clat (usec): min=1615, max=17137, avg=8230.24, stdev=1891.82 00:28:46.955 lat (usec): min=1619, max=17141, avg=8233.86, stdev=1891.93 00:28:46.955 clat percentiles (usec): 00:28:46.955 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6521], 00:28:46.955 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717], 00:28:46.955 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10814], 95.00th=[11076], 00:28:46.955 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14615], 
99.95th=[15008], 00:28:46.955 | 99.99th=[16581] 00:28:46.955 bw ( KiB/s): min=68000, max=86880, per=49.94%, avg=75064.00, stdev=8212.95, samples=4 00:28:46.955 iops : min= 4250, max= 5430, avg=4691.50, stdev=513.31, samples=4 00:28:46.955 write: IOPS=5643, BW=88.2MiB/s (92.5MB/s)(153MiB/1738msec); 0 zone resets 00:28:46.955 slat (usec): min=39, max=303, avg=40.89, stdev= 7.20 00:28:46.955 clat (usec): min=1683, max=18014, avg=9102.57, stdev=1427.46 00:28:46.955 lat (usec): min=1723, max=18054, avg=9143.46, stdev=1428.87 00:28:46.955 clat percentiles (usec): 00:28:46.955 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7963], 00:28:46.955 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:28:46.955 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11207], 00:28:46.955 | 99.00th=[13435], 99.50th=[15139], 99.90th=[16909], 99.95th=[17433], 00:28:46.955 | 99.99th=[17957] 00:28:46.956 bw ( KiB/s): min=70720, max=90560, per=86.35%, avg=77976.00, stdev=8672.39, samples=4 00:28:46.956 iops : min= 4420, max= 5660, avg=4873.50, stdev=542.02, samples=4 00:28:46.956 lat (msec) : 2=0.01%, 4=0.45%, 10=77.38%, 20=22.16% 00:28:46.956 cpu : usr=85.63%, sys=12.82%, ctx=13, majf=0, minf=28 00:28:46.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:46.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:46.956 issued rwts: total=18836,9809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:46.956 00:28:46.956 Run status group 0 (all jobs): 00:28:46.956 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=294MiB (309MB), run=2005-2005msec 00:28:46.956 WRITE: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=153MiB (161MB), run=1738-1738msec 00:28:46.956 07:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.956 07:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:46.956 07:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:46.956 07:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.956 rmmod nvme_tcp 00:28:46.956 rmmod nvme_fabrics 00:28:46.956 rmmod nvme_keyring 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@517 -- # '[' -n 2501688 ']' 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2501688 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2501688 ']' 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2501688 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501688 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501688' 00:28:46.956 killing process with pid 2501688 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2501688 00:28:46.956 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2501688 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.136 07:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.136 00:28:49.136 real 0m17.921s 00:28:49.136 user 1m0.325s 00:28:49.136 sys 0m7.681s 00:28:49.136 07:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.136 07:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.136 ************************************ 00:28:49.136 END TEST nvmf_fio_host 00:28:49.136 ************************************ 00:28:49.398 07:24:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:49.398 07:24:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:49.398 07:24:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.398 07:24:00 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.398 ************************************ 00:28:49.398 START TEST nvmf_failover 00:28:49.398 ************************************ 00:28:49.398 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:49.398 * Looking for test storage... 00:28:49.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.398 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:49.398 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:28:49.398 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.660 --rc genhtml_branch_coverage=1 00:28:49.660 --rc genhtml_function_coverage=1 00:28:49.660 --rc genhtml_legend=1 00:28:49.660 --rc geninfo_all_blocks=1 00:28:49.660 --rc geninfo_unexecuted_blocks=1 00:28:49.660 00:28:49.660 ' 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.660 --rc genhtml_branch_coverage=1 00:28:49.660 --rc genhtml_function_coverage=1 00:28:49.660 --rc genhtml_legend=1 00:28:49.660 --rc geninfo_all_blocks=1 00:28:49.660 --rc geninfo_unexecuted_blocks=1 00:28:49.660 00:28:49.660 ' 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.660 --rc genhtml_branch_coverage=1 00:28:49.660 --rc genhtml_function_coverage=1 00:28:49.660 --rc genhtml_legend=1 00:28:49.660 --rc geninfo_all_blocks=1 00:28:49.660 --rc geninfo_unexecuted_blocks=1 00:28:49.660 00:28:49.660 ' 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.660 --rc genhtml_branch_coverage=1 00:28:49.660 --rc genhtml_function_coverage=1 00:28:49.660 --rc genhtml_legend=1 00:28:49.660 --rc geninfo_all_blocks=1 00:28:49.660 --rc geninfo_unexecuted_blocks=1 00:28:49.660 00:28:49.660 ' 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.660 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
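The xtrace earlier in this test's prologue steps through scripts/common.sh's lt/cmp_versions helper deciding that lcov 1.15 predates 2, which is why the legacy --rc lcov_* spelling of LCOV_OPTS gets exported above. A minimal sketch of that dotted-version comparison, assuming purely numeric fields (the real helper additionally validates each field through its decimal check), could look like:

    # Sketch only, not the exact scripts/common.sh source: compare two dotted
    # versions field by field; missing fields count as 0.
    cmp_versions() {
        local IFS=.-: op=$2 v d1 d2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "old lcov"   # succeeds here, so the legacy LCOV_OPTS path is taken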
00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.661 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:57.809 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:57.809 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:57.809 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:57.809 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.809 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.809 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.809 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.809 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.809 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.809 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:28:57.810 00:28:57.810 --- 10.0.0.2 ping statistics --- 00:28:57.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.810 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:28:57.810 00:28:57.810 --- 10.0.0.1 ping statistics --- 00:28:57.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.810 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2507809 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2507809 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2507809 ']' 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.810 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:57.810 [2024-11-27 07:24:08.298267] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:28:57.810 [2024-11-27 07:24:08.298335] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.810 [2024-11-27 07:24:08.399547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:57.810 [2024-11-27 07:24:08.450130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:57.810 [2024-11-27 07:24:08.450194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.810 [2024-11-27 07:24:08.450203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.810 [2024-11-27 07:24:08.450211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.810 [2024-11-27 07:24:08.450217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.810 [2024-11-27 07:24:08.452026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.810 [2024-11-27 07:24:08.452207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.810 [2024-11-27 07:24:08.452257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.073 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.073 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:58.073 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.074 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.074 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:58.074 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.074 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:58.335 [2024-11-27 07:24:09.329413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.335 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:58.597 Malloc0 00:28:58.597 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:58.597 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:58.859 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.121 [2024-11-27 07:24:10.158069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.121 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:59.383 [2024-11-27 07:24:10.358700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:59.383 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:59.383 [2024-11-27 07:24:10.559400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2508545 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2508545 /var/tmp/bdevperf.sock 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2508545 ']' 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:59.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.643 07:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:00.680 07:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.680 07:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:29:00.680 07:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:00.680 NVMe0n1 00:29:00.680 07:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:00.983 00:29:00.983 07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2508980 00:29:00.983 07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:00.983 07:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:01.924 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.186 [2024-11-27 07:24:13.264024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 
07:24:13.264107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to 
be set 00:29:02.186 [2024-11-27 07:24:13.264212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.186 [2024-11-27 07:24:13.264386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264413] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 [2024-11-27 07:24:13.264476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70210 is same with the state(6) to be set 00:29:02.187 07:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:05.489 07:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:05.489 00:29:05.489 07:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:05.751 [2024-11-27 07:24:16.842366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70cc0 is same with the state(6) to be set 00:29:05.751 [2024-11-27 07:24:16.842408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70cc0 is same with the state(6) to be set 00:29:05.751 [2024-11-27 07:24:16.842415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70cc0 is same with the state(6) to be set 00:29:05.751 [2024-11-27 07:24:16.842420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70cc0 is same with the state(6) to be set 00:29:05.751 [2024-11-27 
00:29:05.752 07:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:29:09.050 07:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:09.050 [2024-11-27 07:24:20.032540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:09.050 07:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
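This is the listener-migration half of the failover exercise: bdevperf stays attached to nqn.2016-06.io.spdk:cnode1 with failover enabled while the target's TCP portal is moved. Reduced to bare RPCs, the sequence the script has driven so far looks roughly like this (a sketch reconstructed only from the rpc.py invocations recorded in this log, with the long Jenkins workspace path shortened to rpc.py):

  # attach once with multipath mode set to failover; I/O keeps running throughout
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # tear down one portal, then bring up the replacement
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1

The remove_listener on port 4422 that follows next is what forces the initiator's active path to fail over.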
00:29:09.992 07:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:10.254 [2024-11-27 07:24:21.225461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe36480 is same with the state(6) to be set
00:29:10.254 07:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2508980
00:29:16.849 {
00:29:16.849 "results": [
00:29:16.849 {
00:29:16.849 "job": "NVMe0n1",
00:29:16.849 "core_mask": "0x1",
00:29:16.849 "workload": "verify",
00:29:16.849 "status": "finished",
00:29:16.849 "verify_range": { 00:29:16.849 "start": 0, 00:29:16.849 "length": 16384 00:29:16.849 }, 00:29:16.849 "queue_depth": 128, 00:29:16.849 "io_size": 4096, 00:29:16.849 "runtime": 15.005149, 00:29:16.849 "iops": 12430.866231318329, 00:29:16.849 "mibps": 48.55807121608722, 00:29:16.849 "io_failed": 7797, 00:29:16.849 "io_timeout": 0, 00:29:16.849 "avg_latency_us": 9862.705089369645, 00:29:16.849 "min_latency_us": 375.46666666666664, 00:29:16.849 "max_latency_us": 35389.44 00:29:16.849 } 00:29:16.849 ], 00:29:16.849 "core_count": 1 00:29:16.849 } 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2508545 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2508545 ']' 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2508545 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2508545 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2508545' 00:29:16.849 killing process with pid 2508545 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2508545 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2508545 00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:16.849 [2024-11-27 07:24:10.639909] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:29:16.849 [2024-11-27 07:24:10.639995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508545 ] 00:29:16.849 [2024-11-27 07:24:10.737344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.849 [2024-11-27 07:24:10.781017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.849 Running I/O for 15 seconds... 
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2508545
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2508545 ']'
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2508545
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2508545
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2508545'
00:29:16.849 killing process with pid 2508545
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2508545
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2508545
00:29:16.849 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:16.849 [2024-11-27 07:24:10.639909] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:29:16.849 [2024-11-27 07:24:10.639995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508545 ]
00:29:16.849 [2024-11-27 07:24:10.737344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:16.849 [2024-11-27 07:24:10.781017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:16.849 Running I/O for 15 seconds...
00:29:16.849 10131.00 IOPS, 39.57 MiB/s [2024-11-27T06:24:28.054Z] [2024-11-27 07:24:13.265426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.849 [2024-11-27 07:24:13.265460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.852 [2024-11-27 07:24:13.267488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:103 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.852 [2024-11-27 07:24:13.267495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.852 [2024-11-27 07:24:13.267504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.852 [2024-11-27 07:24:13.267511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.852 [2024-11-27 07:24:13.267520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.852 [2024-11-27 07:24:13.267527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.852 [2024-11-27 07:24:13.267536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.852 [2024-11-27 07:24:13.267545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.852 [2024-11-27 07:24:13.267554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.852 [2024-11-27 07:24:13.267561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.852 [2024-11-27 07:24:13.267571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:13.267578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:13.267600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.853 [2024-11-27 07:24:13.267607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.853 [2024-11-27 07:24:13.267614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88760 len:8 PRP1 0x0 PRP2 0x0 00:29:16.853 [2024-11-27 07:24:13.267621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:13.267662] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:16.853 [2024-11-27 07:24:13.267684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.853 [2024-11-27 07:24:13.267692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:13.267701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.853 [2024-11-27 07:24:13.267709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:13.267717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.853 [2024-11-27 07:24:13.267724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:13.267732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.853 [2024-11-27 07:24:13.267739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:13.267754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:16.853 [2024-11-27 07:24:13.267790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa2da0 (9): Bad file descriptor 00:29:16.853 [2024-11-27 07:24:13.271384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:16.853 [2024-11-27 07:24:13.391166] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:29:16.853 9977.00 IOPS, 38.97 MiB/s [2024-11-27T06:24:28.058Z] 10393.67 IOPS, 40.60 MiB/s [2024-11-27T06:24:28.058Z] 10874.25 IOPS, 42.48 MiB/s [2024-11-27T06:24:28.058Z] [2024-11-27 07:24:16.844062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.853 [2024-11-27 07:24:16.844093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.853 [2024-11-27 07:24:16.844111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.853 [2024-11-27 07:24:16.844128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844174] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
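For orientation, a minimal sketch of the host-side configuration that produces a failover like the one above. This is an illustrative assumption, not taken from this job's scripts: the bdev name is hypothetical, while the NQN, address, and ports mirror the log. With multipath mode "failover", bdev_nvme holds the alternate trids passive and switches to the next one when the active path dies, which is the bdev_nvme_failover_trid event recorded above.

# hypothetical setup sketch; the -b name is made up, NQN/address/ports follow the log
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover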
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58360 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 
[2024-11-27 07:24:16.844409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.853 [2024-11-27 07:24:16.844422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.853 [2024-11-27 07:24:16.844428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.854 [2024-11-27 07:24:16.844756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.854 [2024-11-27 07:24:16.844767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.854 [2024-11-27 07:24:16.844773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.855 [2024-11-27 07:24:16.844778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.855 [2024-11-27 07:24:16.844789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.855 [2024-11-27 07:24:16.844801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.855 [2024-11-27 07:24:16.844812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.855 [2024-11-27 07:24:16.844823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.855 [2024-11-27 07:24:16.844834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 
[2024-11-27 07:24:16.844877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.844991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.844997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.855 [2024-11-27 07:24:16.845188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.855 [2024-11-27 07:24:16.845194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.856 [2024-11-27 07:24:16.845199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.856 [2024-11-27 07:24:16.845210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58944 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845246] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58952 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58960 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58968 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58976 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58984 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58992 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59000 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59008 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59016 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59024 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59032 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59040 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 
07:24:16.845471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59048 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59056 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59064 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59072 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59080 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59088 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845580] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59096 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59104 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.845615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.845623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.856 [2024-11-27 07:24:16.845627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59112 len:8 PRP1 0x0 PRP2 0x0 00:29:16.856 [2024-11-27 07:24:16.845632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.856 [2024-11-27 07:24:16.857988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.856 [2024-11-27 07:24:16.858010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59120 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59128 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59136 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59144 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59152 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59160 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59168 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59176 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 07:24:16.858176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59184 len:8 PRP1 0x0 PRP2 0x0 00:29:16.857 [2024-11-27 07:24:16.858181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:16.858186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.857 [2024-11-27 07:24:16.858190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.857 [2024-11-27 
07:24:16.858194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59192 len:8 PRP1 0x0 PRP2 0x0
00:29:16.857 [2024-11-27 07:24:16.858199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.857 [2024-11-27 07:24:16.858234] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:16.857 [2024-11-27 07:24:16.858258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.857 [2024-11-27 07:24:16.858265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.857 [2024-11-27 07:24:16.858272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.857 [2024-11-27 07:24:16.858277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.857 [2024-11-27 07:24:16.858283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.857 [2024-11-27 07:24:16.858288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.857 [2024-11-27 07:24:16.858294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.857 [2024-11-27 07:24:16.858299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.857 [2024-11-27 07:24:16.858304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:29:16.857 [2024-11-27 07:24:16.858344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa2da0 (9): Bad file descriptor
00:29:16.857 [2024-11-27 07:24:16.861296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:29:16.857 [2024-11-27 07:24:16.892577] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
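
The records above are the bdev_nvme failover path in action: queued I/O on the dying qpair is completed manually with ABORTED - SQ DELETION, the outstanding admin ASYNC EVENT REQUESTs are aborted, the controller is marked failed, and the driver disconnects and resets onto the alternate trid (10.0.0.2:4422). A minimal sketch of how such an alternate path gets registered on the initiator side, assuming SPDK's scripts/rpc.py and its bdev_nvme_attach_controller multipath option (-x); these flag names are from memory, not from this job's scripts, so verify them against the tree under test:

  # Attach the primary path and name the resulting bdev Nvme0 (sketch, not the exact commands this job ran):
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Register 10.0.0.2:4422 as a failover path for the same subsystem; bdev_nvme_failover_trid
  # switches to it when the primary connection drops, as logged above:
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
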
00:29:16.857 11149.00 IOPS, 43.55 MiB/s [2024-11-27T06:24:28.062Z] 11505.67 IOPS, 44.94 MiB/s [2024-11-27T06:24:28.062Z] 11752.14 IOPS, 45.91 MiB/s [2024-11-27T06:24:28.062Z] 11932.75 IOPS, 46.61 MiB/s [2024-11-27T06:24:28.062Z] 12085.78 IOPS, 47.21 MiB/s [2024-11-27T06:24:28.062Z] [2024-11-27 07:24:21.227239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.857 [2024-11-27 07:24:21.227452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.857 [2024-11-27 07:24:21.227458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.858 [2024-11-27 07:24:21.227463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.858 [2024-11-27 07:24:21.227475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.858 [2024-11-27 07:24:21.227486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
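
The "NNNNN.NN IOPS, NN.NN MiB/s" samples interleaved above are bdevperf's per-second progress output, and the two columns are consistent with 4 KiB I/Os, which the len:8 in the surrounding command dumps suggests (8 blocks of 512 bytes). A quick check of the first sample:

  # 11149.00 IOPS x 4096 bytes per I/O, divided by 2^20 for MiB/s:
  awk 'BEGIN { printf "%.2f MiB/s\n", 11149.00 * 4096 / (1024 * 1024) }'
  # prints 43.55 MiB/s, matching the logged sample
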
00:29:16.858 [2024-11-27 07:24:21.227498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.858 [2024-11-27 07:24:21.227509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.858 [2024-11-27 07:24:21.227520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.858 [2024-11-27 07:24:21.227532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.858 [2024-11-27 07:24:21.227543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:16.858 [2024-11-27 07:24:21.227853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.858 [2024-11-27 07:24:21.227858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.858 [2024-11-27 07:24:21.227864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.227993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.227999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
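
Walls of per-command dumps like the run above are expected here, since every queued request is printed as it is aborted, but they are unreadable raw. When triaging a log like this one, a one-liner summary is usually enough; a sketch, assuming the console output has been saved to build.log (hypothetical file name):

  # Count aborted commands per opcode (READ/WRITE) across the whole log:
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c
  # And pull out just the failover/reset milestones hidden between the dumps:
  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|Resetting controller' build.log
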
00:29:16.859 [2024-11-27 07:24:21.228204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.859 [2024-11-27 07:24:21.228271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.859 [2024-11-27 07:24:21.228284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.859 [2024-11-27 07:24:21.228295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.859 [2024-11-27 07:24:21.228319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7368 len:8 PRP1 0x0 PRP2 0x0 00:29:16.859 [2024-11-27 07:24:21.228324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.859 [2024-11-27 07:24:21.228464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:29:16.859 [2024-11-27 07:24:21.228469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.859 [2024-11-27 07:24:21.228473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7376 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7384 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7400 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7408 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7416 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228581] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7432 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7440 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7448 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7464 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:29:16.860 [2024-11-27 07:24:21.228694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7472 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7480 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7496 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7504 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7512 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228802] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7528 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7536 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7544 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6736 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.860 [2024-11-27 07:24:21.228888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.860 [2024-11-27 07:24:21.228892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.860 [2024-11-27 07:24:21.228896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6744 len:8 PRP1 0x0 PRP2 0x0 00:29:16.860 [2024-11-27 07:24:21.228900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.861 [2024-11-27 07:24:21.228905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.861 [2024-11-27 07:24:21.228909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.861 [2024-11-27 07:24:21.240185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:6752 len:8 PRP1 0x0 PRP2 0x0
00:29:16.861 [2024-11-27 07:24:21.240213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.861 [2024-11-27 07:24:21.240226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:16.861 [2024-11-27 07:24:21.240234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:16.861 [2024-11-27 07:24:21.240241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6760 len:8 PRP1 0x0 PRP2 0x0
00:29:16.861 [2024-11-27 07:24:21.240250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting queued i/o / Command completed manually / ABORTED - SQ DELETION cycle repeats for every remaining queued READ (lba:6528-6856) and WRITE (lba:6864-7368) on qid:1 ...]
00:29:16.865 [2024-11-27 07:24:21.253365] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:16.865 [2024-11-27 07:24:21.253401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.865 [2024-11-27 07:24:21.253413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.865 [2024-11-27 07:24:21.253426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.865 [2024-11-27 07:24:21.253435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.865 [2024-11-27 07:24:21.253455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.865 [2024-11-27 07:24:21.253464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.865 [2024-11-27 07:24:21.253474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.865 [2024-11-27 07:24:21.253483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.865 [2024-11-27 07:24:21.253493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:29:16.865 [2024-11-27 07:24:21.253543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa2da0 (9): Bad file descriptor
00:29:16.865 [2024-11-27 07:24:21.258078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:29:16.865 [2024-11-27 07:24:21.286267] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
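The burst of ABORTED - SQ DELETION completions above is the expected teardown path rather than a failure: when the active path's queues are deleted, every queued I/O is completed manually with that status, bdev_nvme fails over to the next registered trid, and the controller is reset. A minimal sketch of provoking the same failover by hand, reusing the exact rpc.py calls this test issues further below (RPC socket, addresses, ports and NQN are taken from this log; running it outside this test environment is an assumption):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# register three paths to the same subsystem; -x failover selects the failover policy
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# dropping the current path forces bdev_nvme_failover_trid onto the next one
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1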
00:29:16.865 12127.10 IOPS, 47.37 MiB/s
[2024-11-27T06:24:28.070Z] 12208.27 IOPS, 47.69 MiB/s
[2024-11-27T06:24:28.070Z] 12284.50 IOPS, 47.99 MiB/s
[2024-11-27T06:24:28.070Z] 12338.23 IOPS, 48.20 MiB/s
[2024-11-27T06:24:28.070Z] 12382.79 IOPS, 48.37 MiB/s
00:29:16.865 Latency(us)
00:29:16.865 [2024-11-27T06:24:28.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.865 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:16.865 Verification LBA range: start 0x0 length 0x4000
00:29:16.865 NVMe0n1 : 15.01 12430.87 48.56 519.62 0.00 9862.71 375.47 35389.44
00:29:16.865 [2024-11-27T06:24:28.070Z] ===================================================================================================================
00:29:16.865 [2024-11-27T06:24:28.070Z] Total : 12430.87 48.56 519.62 0.00 9862.71 375.47 35389.44
00:29:16.865 Received shutdown signal, test time was about 15.000000 seconds
00:29:16.865
00:29:16.865 Latency(us)
00:29:16.865 [2024-11-27T06:24:28.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.865 [2024-11-27T06:24:28.070Z] ===================================================================================================================
00:29:16.865 [2024-11-27T06:24:28.070Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2511990
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2511990 /var/tmp/bdevperf.sock
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2511990 ']'
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:16.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
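The failover.sh@65-67 xtrace above is the pass/fail gate for the bdevperf run that just finished: the script counts 'Resetting controller successful' lines in the captured output and fails unless exactly three were seen, one per forced failover. A plausible reconstruction of that check from the xtrace (only the grep pattern, count=3 and the arithmetic test are visible; the "$out" variable standing for the captured log, i.e. the try.txt replayed later in this console, is an assumption):

# reconstructed sketch of failover.sh lines 65-67; "$out" is assumed to be the
# captured bdevperf log file
count=$(grep -c 'Resetting controller successful' "$out")
if (( count != 3 )); then
    echo "expected 3 successful failover resets, got $count" >&2
    exit 1
fi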
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:16.865 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:17.127 07:24:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:17.127 07:24:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:29:17.127 07:24:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:17.387 [2024-11-27 07:24:28.449026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:17.387 07:24:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:17.647 [2024-11-27 07:24:28.633506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:29:17.647 07:24:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:17.907 NVMe0n1
00:29:17.907 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:18.483
00:29:18.483 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:18.742
00:29:18.743 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:18.743 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:29:18.743 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:19.002 07:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:29:22.295 07:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:22.295 07:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:29:22.295 07:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2513060
00:29:22.295 07:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:22.295 07:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2513060
00:29:23.239 {
00:29:23.239 "results": [
00:29:23.239 {
00:29:23.239 "job": "NVMe0n1",
00:29:23.239 "core_mask": "0x1",
00:29:23.239 "workload": "verify",
00:29:23.239 "status": "finished",
00:29:23.239 "verify_range": {
00:29:23.239 "start": 0,
00:29:23.239 "length": 16384
00:29:23.239 },
00:29:23.239 "queue_depth": 128,
00:29:23.239 "io_size": 4096,
00:29:23.239 "runtime": 1.007394,
00:29:23.239 "iops": 12770.574373085406,
00:29:23.239 "mibps": 49.88505614486487,
00:29:23.239 "io_failed": 0,
00:29:23.239 "io_timeout": 0,
00:29:23.239 "avg_latency_us": 9989.136317139526,
00:29:23.239 "min_latency_us": 1925.12,
00:29:23.239 "max_latency_us": 8465.066666666668
00:29:23.239 }
00:29:23.239 ],
00:29:23.239 "core_count": 1
00:29:23.239 }
00:29:23.239 07:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:23.239 [2024-11-27 07:24:27.485003] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:29:23.239 [2024-11-27 07:24:27.485061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511990 ]
00:29:23.239 [2024-11-27 07:24:27.567928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:23.239 [2024-11-27 07:24:27.596359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:23.239 [2024-11-27 07:24:30.075478] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:23.239 [2024-11-27 07:24:30.075520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:23.239 [2024-11-27 07:24:30.075529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:23.239 [2024-11-27 07:24:30.075537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:23.239 [2024-11-27 07:24:30.075543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:23.239 [2024-11-27 07:24:30.075549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:23.239 [2024-11-27 07:24:30.075554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:23.239 [2024-11-27 07:24:30.075560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:23.239 [2024-11-27 07:24:30.075565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:23.239 [2024-11-27 07:24:30.075575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:29:23.239 [2024-11-27 07:24:30.075596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:29:23.239 [2024-11-27 07:24:30.075607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1789da0 (9): Bad file descriptor
00:29:23.239 [2024-11-27 07:24:30.209321] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:29:23.239 Running I/O for 1 seconds...
00:29:23.239 12737.00 IOPS, 49.75 MiB/s
00:29:23.239 Latency(us)
00:29:23.239 [2024-11-27T06:24:34.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:23.239 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:23.239 Verification LBA range: start 0x0 length 0x4000
00:29:23.239 NVMe0n1 : 1.01 12770.57 49.89 0.00 0.00 9989.14 1925.12 8465.07
00:29:23.239 [2024-11-27T06:24:34.444Z] ===================================================================================================================
00:29:23.239 [2024-11-27T06:24:34.444Z] Total : 12770.57 49.89 0.00 0.00 9989.14 1925.12 8465.07
00:29:23.239 07:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:23.239 07:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:29:23.500 07:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:23.760 07:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:29:23.760 07:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:24.021 07:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:24.021 07:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2511990
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2511990 ']'
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2511990
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511990
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511990'
00:29:27.325 killing process with pid 2511990
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2511990
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2511990
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:29:27.325 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:27.586 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:27.586 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2507809 ']'
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2507809
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2507809 ']'
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2507809
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507809
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507809'
00:29:27.848 killing process with pid 2507809
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2507809
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2507809
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:27.848 07:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:30.397
00:29:30.397 real 0m40.639s
00:29:30.397 user 2m4.613s
00:29:30.397 sys 0m9.009s
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:30.397 ************************************
00:29:30.397 END TEST nvmf_failover
00:29:30.397 ************************************
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.397 ************************************
00:29:30.397 START TEST nvmf_host_discovery
00:29:30.397 ************************************
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:29:30.397 * Looking for test storage...
00:29:30.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:29:30.397 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.398 --rc genhtml_branch_coverage=1
00:29:30.398 --rc genhtml_function_coverage=1
00:29:30.398 --rc genhtml_legend=1
00:29:30.398 --rc geninfo_all_blocks=1
00:29:30.398 --rc geninfo_unexecuted_blocks=1
00:29:30.398
00:29:30.398 '
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.398 --rc genhtml_branch_coverage=1
00:29:30.398 --rc genhtml_function_coverage=1
00:29:30.398 --rc genhtml_legend=1
00:29:30.398 --rc geninfo_all_blocks=1
00:29:30.398 --rc geninfo_unexecuted_blocks=1
00:29:30.398
00:29:30.398 '
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.398 --rc genhtml_branch_coverage=1
00:29:30.398 --rc genhtml_function_coverage=1
00:29:30.398 --rc genhtml_legend=1
00:29:30.398 --rc geninfo_all_blocks=1
00:29:30.398 --rc geninfo_unexecuted_blocks=1
00:29:30.398
00:29:30.398 '
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:30.398 --rc genhtml_branch_coverage=1
00:29:30.398 --rc genhtml_function_coverage=1
00:29:30.398 --rc genhtml_legend=1
00:29:30.398 --rc geninfo_all_blocks=1
00:29:30.398 --rc geninfo_unexecuted_blocks=1
00:29:30.398
00:29:30.398 '
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:29:30.398 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=()
00:29:38.548 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=()
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=()
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:29:38.549 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:29:38.549 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:29:38.549 Found net devices under 0000:4b:00.0: cvl_0_0
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:29:38.549 Found net devices under 0000:4b:00.1: cvl_0_1
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:38.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:38.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms
00:29:38.549
00:29:38.549 --- 10.0.0.2 ping statistics ---
00:29:38.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.549 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:38.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:38.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms
00:29:38.549
00:29:38.549 --- 10.0.0.1 ping statistics ---
00:29:38.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.549 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:38.549 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2518370
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2518370
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2518370 ']'
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:38.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:38.550 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.550 [2024-11-27 07:24:48.930393] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:29:38.550 [2024-11-27 07:24:48.930456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:38.550 [2024-11-27 07:24:49.032324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:38.550 [2024-11-27 07:24:49.082346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:38.550 [2024-11-27 07:24:49.082424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:38.550 [2024-11-27 07:24:49.082434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:38.550 [2024-11-27 07:24:49.082441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:38.550 [2024-11-27 07:24:49.082448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:38.550 [2024-11-27 07:24:49.083265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.812 [2024-11-27 07:24:49.814464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.812 [2024-11-27 07:24:49.826762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.812 null0
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.812 null1
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2518655
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2518655 /tmp/host.sock
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2518655 ']'
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:29:38.812 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:38.812 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:38.812 [2024-11-27 07:24:49.924513] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:29:38.812 [2024-11-27 07:24:49.924580] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518655 ]
00:29:39.074 [2024-11-27 07:24:50.018470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:39.074 [2024-11-27 07:24:50.074718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:39.647 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:39.648 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:39.910 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:39.911 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.911 [2024-11-27 07:24:51.102047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.911 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:40.172 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length' 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:29:40.173 07:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:29:40.745 [2024-11-27 07:24:51.774141] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:40.745 [2024-11-27 07:24:51.774180] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:40.745 [2024-11-27 07:24:51.774195] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:40.745 
[2024-11-27 07:24:51.863449] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:40.745 [2024-11-27 07:24:51.921298] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:40.745 [2024-11-27 07:24:51.922688] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bc9670:1 started. 00:29:40.745 [2024-11-27 07:24:51.924600] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:40.745 [2024-11-27 07:24:51.924630] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:40.745 [2024-11-27 07:24:51.932186] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bc9670 was disconnected and freed. delete nvme_qpair. 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.318 07:24:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.318 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.580 [2024-11-27 07:24:52.544995] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bc9850:1 started. 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.580 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:41.581 [2024-11-27 07:24:52.553675] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bc9850 was disconnected and freed. delete nvme_qpair. 
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.581 [2024-11-27 07:24:52.653996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-11-27 07:24:52.654268] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-11-27 07:24:52.654288] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:41.581 [2024-11-27 07:24:52.740546] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:29:41.581 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.843 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:29:41.843 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
[2024-11-27 07:24:53.045979] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
[2024-11-27 07:24:53.046018] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-11-27 07:24:53.046027] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-11-27 07:24:53.046032] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:42.785 [2024-11-27 07:24:53.925530] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-11-27 07:24:53.925552] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
[2024-11-27 07:24:53.926295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 07:24:53.926311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 07:24:53.926320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 07:24:53.926328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 07:24:53.926336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 07:24:53.926344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 07:24:53.926356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-27 07:24:53.926364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-27 07:24:53.926371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:42.785 [2024-11-27 07:24:53.936303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:42.785 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:42.785 [2024-11-27 07:24:53.946337] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-27 07:24:53.946350] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-27 07:24:53.946355] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-27 07:24:53.946360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-27 07:24:53.946378] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-27 07:24:53.946697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-27 07:24:53.946712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99c50 with addr=10.0.0.2, port=4420
[2024-11-27 07:24:53.946721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
[2024-11-27 07:24:53.946733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
[2024-11-27 07:24:53.946760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-27 07:24:53.946769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-27 07:24:53.946777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-27 07:24:53.946784] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:42.786 [2024-11-27 07:24:53.946789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-27 07:24:53.946793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-11-27 07:24:53.956409] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-27 07:24:53.956420] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-27 07:24:53.956425] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-27 07:24:53.956429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-27 07:24:53.956443] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-27 07:24:53.956725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-27 07:24:53.956737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99c50 with addr=10.0.0.2, port=4420
[2024-11-27 07:24:53.956745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
[2024-11-27 07:24:53.956756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
[2024-11-27 07:24:53.956766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-27 07:24:53.956773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-27 07:24:53.956780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-27 07:24:53.956786] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-27 07:24:53.956791] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-27 07:24:53.956796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-27 07:24:53.966475] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-27 07:24:53.966488] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-27 07:24:53.966492] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-27 07:24:53.966497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-27 07:24:53.966511] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:42.786 [2024-11-27 07:24:53.966791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-27 07:24:53.966803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99c50 with addr=10.0.0.2, port=4420
[2024-11-27 07:24:53.966811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
[2024-11-27 07:24:53.966822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
[2024-11-27 07:24:53.966838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-27 07:24:53.966845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-27 07:24:53.966853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-27 07:24:53.966859] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-27 07:24:53.966867] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-27 07:24:53.966871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:42.786 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-11-27 07:24:53.976542] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-27 07:24:53.976555] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:42.786 [2024-11-27 07:24:53.976560] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-27 07:24:53.976565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-27 07:24:53.976579] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-27 07:24:53.976770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-27 07:24:53.976782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99c50 with addr=10.0.0.2, port=4420
[2024-11-27 07:24:53.976790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
[2024-11-27 07:24:53.976801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
[2024-11-27 07:24:53.976811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-27 07:24:53.976819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-27 07:24:53.976826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-27 07:24:53.976832] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-27 07:24:53.976837] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-27 07:24:53.976841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-27 07:24:53.986611] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-27 07:24:53.986631] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-27 07:24:53.986635] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-27 07:24:53.986640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-27 07:24:53.986655] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:42.786 [2024-11-27 07:24:53.986936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-27 07:24:53.986948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99c50 with addr=10.0.0.2, port=4420
[2024-11-27 07:24:53.986955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
[2024-11-27 07:24:53.986967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
[2024-11-27 07:24:53.986990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-27 07:24:53.986998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-27 07:24:53.987005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-27 07:24:53.987012] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-27 07:24:53.987017] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-27 07:24:53.987021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:43.047 [2024-11-27 07:24:53.996686] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-27 07:24:53.996698] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-27 07:24:53.996703] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-27 07:24:53.996708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-27 07:24:53.996721] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-27 07:24:53.997000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-27 07:24:53.997011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99c50 with addr=10.0.0.2, port=4420
[2024-11-27 07:24:53.997019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
[2024-11-27 07:24:53.997029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
[2024-11-27 07:24:53.997045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-27 07:24:53.997052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-27 07:24:53.997059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-27 07:24:53.997065] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:43.047 [2024-11-27 07:24:53.997070] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-27 07:24:53.997074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-27 07:24:54.006753] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-27 07:24:54.006764] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-27 07:24:54.006768] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-27 07:24:54.006773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-27 07:24:54.006787] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-27 07:24:54.007114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-27 07:24:54.007125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99c50 with addr=10.0.0.2, port=4420
[2024-11-27 07:24:54.007133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99c50 is same with the state(6) to be set
[2024-11-27 07:24:54.007143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99c50 (9): Bad file descriptor
[2024-11-27 07:24:54.007171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-27 07:24:54.007180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-27 07:24:54.007187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-27 07:24:54.007193] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-27 07:24:54.007198] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-27 07:24:54.007202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:43.048 [2024-11-27 07:24:54.012181] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
[2024-11-27 07:24:54.012198] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:43.048 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.309 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:43.309 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:43.309 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:43.309 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:43.309 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:43.309 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.309 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.249 [2024-11-27 07:24:55.322079] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:44.249 [2024-11-27 07:24:55.322099] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:44.249 [2024-11-27 07:24:55.322108] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:44.249 [2024-11-27 07:24:55.411358] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:44.820 [2024-11-27 07:24:55.719615] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:29:44.820 [2024-11-27 07:24:55.720273] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1cf9210:1 started. 
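Before the trace continues with the re-attached controller, it is worth unpacking the helper that produced most of the xtrace above: the autotest `waitforcondition` routine (autotest_common.sh lines 918-922 in this trace), which polls an arbitrary bash condition up to `max=10` times and returns as soon as it holds. A minimal re-sketch of that pattern; the retry delay is an assumption, since it is not visible in this trace:

```bash
# Re-sketch of the waitforcondition helper seen in the xtrace above:
# evaluate a caller-supplied bash condition repeatedly until it is true
# or the retry budget is exhausted.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0   # condition became true
        fi
        sleep 0.5      # assumed back-off; the real delay is not shown in the trace
    done
    return 1           # condition never held within 10 attempts
}

# Usage as in the trace: block until controller nvme0 only exposes port 4421.
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
```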
00:29:44.820 [2024-11-27 07:24:55.721591] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:44.820 [2024-11-27 07:24:55.721613] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:44.820 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.821 [2024-11-27 07:24:55.731209] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1cf9210 was disconnected and freed. delete nvme_qpair. 
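At host/discovery.sh@143 the test re-issues `bdev_nvme_start_discovery` under the `NOT` wrapper, reusing the service name `nvme` that is still registered; the JSON-RPC dump that follows is the expected rejection (code -17, "File exists"). A minimal sketch of that expected-failure assertion, assuming the discovery service from the previous step is still running on the same host socket:

```bash
# Expected-failure check: starting a second discovery service with an
# already-registered -b name must be rejected by the host app.
# Socket path and arguments are the ones used in this trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

if "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
    echo "duplicate discovery name unexpectedly accepted" >&2
    exit 1
fi
```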
00:29:44.821 request: 00:29:44.821 { 00:29:44.821 "name": "nvme", 00:29:44.821 "trtype": "tcp", 00:29:44.821 "traddr": "10.0.0.2", 00:29:44.821 "adrfam": "ipv4", 00:29:44.821 "trsvcid": "8009", 00:29:44.821 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:44.821 "wait_for_attach": true, 00:29:44.821 "method": "bdev_nvme_start_discovery", 00:29:44.821 "req_id": 1 00:29:44.821 } 00:29:44.821 Got JSON-RPC error response 00:29:44.821 response: 00:29:44.821 { 00:29:44.821 "code": -17, 00:29:44.821 "message": "File exists" 00:29:44.821 } 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.821 request: 00:29:44.821 { 00:29:44.821 "name": "nvme_second", 00:29:44.821 "trtype": "tcp", 00:29:44.821 "traddr": "10.0.0.2", 00:29:44.821 "adrfam": "ipv4", 00:29:44.821 "trsvcid": "8009", 00:29:44.821 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:44.821 "wait_for_attach": true, 00:29:44.821 "method": "bdev_nvme_start_discovery", 00:29:44.821 "req_id": 1 00:29:44.821 } 00:29:44.821 Got JSON-RPC error response 00:29:44.821 response: 00:29:44.821 { 00:29:44.821 "code": -17, 00:29:44.821 "message": "File exists" 00:29:44.821 } 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.821 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:44.822 07:24:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.822 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.207 [2024-11-27 07:24:56.977458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.207 [2024-11-27 07:24:56.977480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf70e0 with addr=10.0.0.2, port=8010 00:29:46.207 [2024-11-27 07:24:56.977489] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:46.207 [2024-11-27 07:24:56.977494] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:46.207 [2024-11-27 07:24:56.977499] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:46.777 [2024-11-27 07:24:57.979761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.777 [2024-11-27 07:24:57.979779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf70e0 with addr=10.0.0.2, port=8010 00:29:46.777 [2024-11-27 07:24:57.979787] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:46.777 [2024-11-27 07:24:57.979792] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:46.777 [2024-11-27 07:24:57.979797] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:48.164 [2024-11-27 07:24:58.981802] 
bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:48.164 request: 00:29:48.164 { 00:29:48.164 "name": "nvme_second", 00:29:48.164 "trtype": "tcp", 00:29:48.164 "traddr": "10.0.0.2", 00:29:48.164 "adrfam": "ipv4", 00:29:48.164 "trsvcid": "8010", 00:29:48.164 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:48.164 "wait_for_attach": false, 00:29:48.164 "attach_timeout_ms": 3000, 00:29:48.164 "method": "bdev_nvme_start_discovery", 00:29:48.164 "req_id": 1 00:29:48.164 } 00:29:48.164 Got JSON-RPC error response 00:29:48.164 response: 00:29:48.164 { 00:29:48.164 "code": -110, 00:29:48.164 "message": "Connection timed out" 00:29:48.164 } 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:48.164 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2518655 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.164 rmmod nvme_tcp 00:29:48.164 rmmod nvme_fabrics 00:29:48.164 rmmod nvme_keyring 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:48.164 07:24:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2518370 ']' 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2518370 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2518370 ']' 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2518370 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518370 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518370' 00:29:48.164 killing process with pid 2518370 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2518370 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2518370 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.164 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.711 00:29:50.711 real 0m20.232s 00:29:50.711 user 0m23.515s 00:29:50.711 sys 0m7.099s 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.711 ************************************ 00:29:50.711 END TEST nvmf_host_discovery 00:29:50.711 ************************************ 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.711 ************************************ 00:29:50.711 START TEST nvmf_host_multipath_status 00:29:50.711 ************************************ 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:50.711 * Looking for test storage... 00:29:50.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.711 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.712 --rc genhtml_branch_coverage=1 00:29:50.712 --rc genhtml_function_coverage=1 00:29:50.712 --rc genhtml_legend=1 00:29:50.712 --rc geninfo_all_blocks=1 00:29:50.712 --rc geninfo_unexecuted_blocks=1 00:29:50.712 00:29:50.712 ' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.712 --rc genhtml_branch_coverage=1 00:29:50.712 --rc genhtml_function_coverage=1 00:29:50.712 --rc genhtml_legend=1 00:29:50.712 --rc geninfo_all_blocks=1 00:29:50.712 --rc geninfo_unexecuted_blocks=1 00:29:50.712 00:29:50.712 ' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.712 --rc genhtml_branch_coverage=1 00:29:50.712 --rc genhtml_function_coverage=1 00:29:50.712 --rc genhtml_legend=1 00:29:50.712 --rc geninfo_all_blocks=1 00:29:50.712 --rc geninfo_unexecuted_blocks=1 00:29:50.712 00:29:50.712 ' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.712 --rc genhtml_branch_coverage=1 00:29:50.712 --rc genhtml_function_coverage=1 00:29:50.712 --rc genhtml_legend=1 00:29:50.712 --rc geninfo_all_blocks=1 00:29:50.712 --rc geninfo_unexecuted_blocks=1 00:29:50.712 00:29:50.712 ' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
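The `lt 1.15 2` / `cmp_versions` xtrace above is scripts/common.sh checking the installed lcov version against 2: both version strings are split on dots and compared numerically, component by component. A compact re-sketch of that comparison; the helper name `ver_lt` is illustrative, not the script's own:

```bash
# Dotted-version "less than" in the style of scripts/common.sh cmp_versions:
# split on dots, compare numerically per component, missing components are 0.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 check in the trace
```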
00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.712 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.713 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.713 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.713 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.713 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.713 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.713 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.858 07:25:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:58.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
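Here nvmf/common.sh walks the detected E810 functions; for each PCI address it resolves the kernel net device through sysfs, the `/sys/bus/pci/devices/$pci/net/*` glob visible in the trace below. A standalone sketch of that lookup, using the two addresses found in this run:

```bash
# Map the two E810 PCI functions found in this run to their net devices,
# using the same sysfs glob nvmf/common.sh applies below. On this test bed
# it prints e.g. "Found net devices under 0000:4b:00.0: cvl_0_0".
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
    done
done
```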
00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:58.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:58.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:29:58.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.858 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.858 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.858 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.858 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.859 07:25:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:29:58.859 00:29:58.859 --- 10.0.0.2 ping statistics --- 00:29:58.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.859 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:29:58.859 00:29:58.859 --- 10.0.0.1 ping statistics --- 00:29:58.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.859 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2524681 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2524681 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2524681 ']' 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.859 07:25:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.859 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:58.859 [2024-11-27 07:25:09.265465] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:29:58.859 [2024-11-27 07:25:09.265535] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.859 [2024-11-27 07:25:09.382872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:58.859 [2024-11-27 07:25:09.434663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.859 [2024-11-27 07:25:09.434715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.859 [2024-11-27 07:25:09.434725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.859 [2024-11-27 07:25:09.434732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.859 [2024-11-27 07:25:09.434738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.859 [2024-11-27 07:25:09.436309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.859 [2024-11-27 07:25:09.436358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2524681 00:29:59.120 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:59.120 [2024-11-27 07:25:10.300954] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.382 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:59.382 Malloc0 00:29:59.382 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:29:59.644 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:59.905 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.905 [2024-11-27 07:25:11.081076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.905 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:00.167 [2024-11-27 07:25:11.265557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2525166 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2525166 /var/tmp/bdevperf.sock 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2525166 ']' 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:00.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
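For reference, the whole target bring-up traced above reduces to a short RPC sequence against the nvmf_tgt that was started inside the cvl_0_0_ns_spdk namespace. A condensed sketch using the exact NQN, addresses and sizes from this run (rpc below abbreviates the full rpc.py path shown in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport with the options from multipath_status.sh@36
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # -r enables the ANA reporting this test exercises
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same IP give the initiator two distinct paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started with -z so it idles on /var/tmp/bdevperf.sock until both controllers are attached.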
00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.167 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:01.182 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.182 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:30:01.182 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:01.182 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:01.475 Nvme0n1 00:30:01.475 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:01.756 Nvme0n1 00:30:01.756 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:01.756 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:04.305 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:04.305 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:04.305 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:04.305 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:05.247 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:05.247 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:05.247 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.247 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.507 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:05.768 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.768 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:05.768 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.768 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:06.032 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.032 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:06.032 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.032 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:06.293 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.293 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:06.293 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.293 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:06.293 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.293 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:06.293 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
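Every check_status round in this trace decomposes into one primitive: ask bdevperf for its io_paths and compare a single attribute of a single listener port. A minimal reconstruction of the port_status helper, inferred from the rpc.py and jq invocations logged at multipath_status.sh@64 (the local variable names here are illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  port_status() {
      local port=$1 attr=$2 expected=$3   # attr: current, connected or accessible
      local actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ $actual == "$expected" ]]
  }

  port_status 4420 current true   # the first assertion of each round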
00:30:06.553 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:06.813 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:07.755 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:07.755 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:07.755 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.755 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:08.015 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:08.015 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:08.015 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.015 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:08.015 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.015 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:08.015 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:08.015 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.275 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.275 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:08.275 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.275 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
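The set_ANA_state helper that drives each round assigns one ANA state per listener, as the paired RPCs at multipath_status.sh@59 and @60 show; a sketch with the values from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  set_ANA_state() {
      # $1: state for the 4420 listener, $2: state for the 4421 listener
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

Each transition is followed by sleep 1, giving the host time to observe the ANA change before the next block of assertions runs.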
00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.535 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:08.796 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.796 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:08.796 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:09.057 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:09.057 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:10.441 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:10.441 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.442 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:10.702 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.702 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:10.702 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.702 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:10.963 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.963 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:10.963 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.963 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:10.963 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.963 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:10.963 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.963 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:11.223 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.223 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:11.223 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:11.482 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:11.743 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:12.684 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:12.685 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:12.685 07:25:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.685 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:12.945 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.945 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:12.945 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.945 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:12.945 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:12.945 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:12.945 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.945 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:13.206 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.206 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:13.206 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.206 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:13.466 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.466 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:13.466 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.466 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:13.466 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.466 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:13.466 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.466 07:25:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:13.727 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:13.727 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:13.727 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:13.988 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:13.988 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:15.371 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:15.371 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:15.371 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.372 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:15.630 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.630 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:15.630 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.630 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:15.889 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.889 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:15.889 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.889 07:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:15.889 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:15.889 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:15.889 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.889 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:16.147 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:16.147 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:16.147 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:16.405 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:16.686 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:17.623 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:17.623 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:17.623 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.623 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:17.623 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:17.623 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:17.623 07:25:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:17.883 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.883 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.883 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:17.883 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.883 07:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:18.140 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.140 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:18.140 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:18.140 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.400 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:18.660 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.660 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:18.920 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:18.921 07:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:18.921 07:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:19.181 07:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:20.121 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:20.121 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:20.121 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.121 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:20.381 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.381 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:20.381 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.381 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:20.642 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.642 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:20.642 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.642 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:20.902 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.902 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:20.903 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.903 07:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:20.903 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.903 07:25:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:20.903 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.903 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:21.163 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.163 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:21.163 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.163 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:21.424 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.424 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:21.424 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:21.424 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:21.684 07:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:22.625 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:22.625 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:22.625 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.625 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:22.887 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.887 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:22.887 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:22.887 07:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:23.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.148 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:23.148 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.148 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:23.148 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.148 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:23.408 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.408 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:23.408 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:23.408 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.670 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.670 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:23.670 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.670 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:23.670 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.670 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:23.670 07:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:23.930 07:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:24.191 07:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
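The expectations flip in these later rounds because multipath_status.sh@116 switched Nvme0n1 to the active/active policy:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

Under the default active/passive policy only one path is current at a time, hence the true/false pairs in the earlier rounds. Under active/active, every connected path in the best available ANA group is current at once: optimized/optimized asserted current true/true above, non_optimized/optimized kept only the optimized 4421 path current, and the non_optimized/non_optimized round that follows is expected to report true/true again.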
00:30:25.136 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:25.136 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:25.136 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.136 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:25.397 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.397 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:25.397 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.397 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.657 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:25.918 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.918 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:25.918 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.918 07:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:26.179 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.179 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:26.179 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.179 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:26.179 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.179 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:26.179 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:26.440 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:26.700 07:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:27.641 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:27.641 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:27.641 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.641 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:27.903 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.903 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:27.903 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:27.903 07:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.903 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:27.903 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:27.903 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.903 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:28.162 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:28.162 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:28.162 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.162 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:28.423 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.423 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:28.423 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.423 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2525166 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2525166 ']' 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2525166 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:30:28.683 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.684 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2525166 00:30:28.948 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:28.948 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:28.948 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2525166' 00:30:28.948 killing process with pid 2525166 00:30:28.948 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2525166 00:30:28.948 07:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2525166 00:30:28.948 { 00:30:28.948 "results": [ 00:30:28.948 { 00:30:28.948 "job": "Nvme0n1", 
00:30:28.948 "core_mask": "0x4", 00:30:28.948 "workload": "verify", 00:30:28.948 "status": "terminated", 00:30:28.948 "verify_range": { 00:30:28.948 "start": 0, 00:30:28.948 "length": 16384 00:30:28.948 }, 00:30:28.948 "queue_depth": 128, 00:30:28.948 "io_size": 4096, 00:30:28.948 "runtime": 26.812596, 00:30:28.948 "iops": 12053.737728342307, 00:30:28.948 "mibps": 47.084913001337135, 00:30:28.948 "io_failed": 0, 00:30:28.948 "io_timeout": 0, 00:30:28.948 "avg_latency_us": 10599.847819335462, 00:30:28.948 "min_latency_us": 450.56, 00:30:28.948 "max_latency_us": 3019898.88 00:30:28.948 } 00:30:28.948 ], 00:30:28.948 "core_count": 1 00:30:28.948 } 00:30:28.948 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2525166 00:30:28.949 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:28.949 [2024-11-27 07:25:11.339192] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:30:28.949 [2024-11-27 07:25:11.339273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525166 ] 00:30:28.949 [2024-11-27 07:25:11.431710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.949 [2024-11-27 07:25:11.482792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.949 Running I/O for 90 seconds... 00:30:28.949 9991.00 IOPS, 39.03 MiB/s [2024-11-27T06:25:40.154Z] 10827.50 IOPS, 42.29 MiB/s [2024-11-27T06:25:40.154Z] 11507.00 IOPS, 44.95 MiB/s [2024-11-27T06:25:40.154Z] 11872.50 IOPS, 46.38 MiB/s [2024-11-27T06:25:40.154Z] 12112.00 IOPS, 47.31 MiB/s [2024-11-27T06:25:40.154Z] 12261.83 IOPS, 47.90 MiB/s [2024-11-27T06:25:40.154Z] 12344.57 IOPS, 48.22 MiB/s [2024-11-27T06:25:40.154Z] 12451.00 IOPS, 48.64 MiB/s [2024-11-27T06:25:40.154Z] 12497.78 IOPS, 48.82 MiB/s [2024-11-27T06:25:40.154Z] 12543.60 IOPS, 49.00 MiB/s [2024-11-27T06:25:40.154Z] 12591.45 IOPS, 49.19 MiB/s [2024-11-27T06:25:40.154Z] [2024-11-27 07:25:24.984621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.949 [2024-11-27 07:25:24.984654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:28.949 [2024-11-27 07:25:24.984686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.949 [2024-11-27 07:25:24.984693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:28.949 [2024-11-27 07:25:24.984704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.949 [2024-11-27 07:25:24.984709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:28.949 [2024-11-27 07:25:24.984720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.949 [2024-11-27 07:25:24.984725] nvme_qpair.c: 
00:30:28.949 [2024-11-27 07:25:24.984621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.949 [2024-11-27 07:25:24.984654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:30:28.949 [2024-11-27 07:25:24.984686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.949 [2024-11-27 07:25:24.984693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:30:28.949 [... well over a hundred further command/completion pairs stamped 07:25:24.984-24.987, every one completed ASYMMETRIC ACCESS INACCESSIBLE (03/02): WRITE lba 23208-24024 and READ lba 23008-23184, trimmed ...]
00:30:28.952 12527.67 IOPS, 48.94 MiB/s [2024-11-27T06:25:40.157Z]
11564.00 IOPS, 45.17 MiB/s [2024-11-27T06:25:40.157Z]
10738.00 IOPS, 41.95 MiB/s [2024-11-27T06:25:40.157Z]
10099.73 IOPS, 39.45 MiB/s [2024-11-27T06:25:40.157Z]
10270.94 IOPS, 40.12 MiB/s [2024-11-27T06:25:40.157Z]
10421.94 IOPS, 40.71 MiB/s [2024-11-27T06:25:40.157Z]
10770.11 IOPS, 42.07 MiB/s [2024-11-27T06:25:40.157Z]
11088.89 IOPS, 43.32 MiB/s [2024-11-27T06:25:40.157Z]
11295.20 IOPS, 44.12 MiB/s [2024-11-27T06:25:40.157Z]
11370.62 IOPS, 44.42 MiB/s [2024-11-27T06:25:40.157Z]
11444.73 IOPS, 44.71 MiB/s [2024-11-27T06:25:40.157Z]
11656.83 IOPS, 45.53 MiB/s [2024-11-27T06:25:40.157Z]
11865.04 IOPS, 46.35 MiB/s [2024-11-27T06:25:40.157Z]
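Dumps like the one trimmed above are easiest to digest as counts rather than raw pairs. A sketch that tallies the failed commands per opcode in the try.txt file cat'ed earlier (GNU grep assumed; the patterns follow the log format shown here):

    # Sketch: count INACCESSIBLE-completed commands in try.txt by opcode.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: \(WRITE\|READ\)' try.txt |
        awk '{ n[$NF]++ } END { for (op in n) print op, n[op] }'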
00:30:28.952 [2024-11-27 07:25:37.707187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.952 [2024-11-27 07:25:37.707219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:30:28.952 [... roughly fifty further command/completion pairs stamped 07:25:37.707-37.710, again all completed ASYMMETRIC ACCESS INACCESSIBLE (03/02): WRITE lba 48-736 and READ lba 8, 131032, 131040, trimmed ...]
WRITE sqid:1 cid:97 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.953 [2024-11-27 07:25:37.710092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:30:28.953 [2024-11-27 07:25:37.710102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.953 [2024-11-27 07:25:37.710107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:30:28.953 [2024-11-27 07:25:37.710118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.953 [2024-11-27 07:25:37.710123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:30:28.953 [2024-11-27 07:25:37.710133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.953 [2024-11-27 07:25:37.710138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:30:28.953 [2024-11-27 07:25:37.710149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.953 [2024-11-27 07:25:37.710154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:30:28.953 [2024-11-27 07:25:37.710168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:28.953 [2024-11-27 07:25:37.710174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:30:28.953 [2024-11-27 07:25:37.710185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.953 [2024-11-27 07:25:37.710190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:28.953 12000.20 IOPS, 46.88 MiB/s [2024-11-27T06:25:40.158Z]
12028.35 IOPS, 46.99 MiB/s [2024-11-27T06:25:40.158Z]
Received shutdown signal, test time was about 26.813208 seconds
00:30:28.953
00:30:28.953 Latency(us)
00:30:28.953 [2024-11-27T06:25:40.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:28.953 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:28.953 Verification LBA range: start 0x0 length 0x4000
00:30:28.953 Nvme0n1 : 26.81 12053.74 47.08 0.00 0.00 10599.85 450.56 3019898.88
00:30:28.953 [2024-11-27T06:25:40.158Z] ===================================================================================================================
00:30:28.953 [2024-11-27T06:25:40.158Z] Total : 12053.74 47.08 0.00 0.00 10599.85 450.56 3019898.88
00:30:28.953 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # 
trap - SIGINT SIGTERM EXIT 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.215 rmmod nvme_tcp 00:30:29.215 rmmod nvme_fabrics 00:30:29.215 rmmod nvme_keyring 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2524681 ']' 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2524681 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2524681 ']' 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2524681 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524681 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524681' 00:30:29.215 killing process with pid 2524681 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2524681 00:30:29.215 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2524681 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:30:29.476 07:25:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.476 07:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.388 07:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.388 00:30:31.388 real 0m41.089s 00:30:31.388 user 1m46.079s 00:30:31.388 sys 0m11.591s 00:30:31.388 07:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.388 07:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:31.388 ************************************ 00:30:31.388 END TEST nvmf_host_multipath_status 00:30:31.388 ************************************ 00:30:31.388 07:25:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:31.388 07:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:31.388 07:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.388 07:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.649 ************************************ 00:30:31.649 START TEST nvmf_discovery_remove_ifc 00:30:31.649 ************************************ 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:31.649 * Looking for test storage... 
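Condensed, the nvmftestfini teardown traced above (sync, kernel module unload, killing the target process, firewall and address cleanup) comes down to the short sequence below. This is a simplified sketch of the helpers in nvmf/common.sh, not the verbatim functions; cleanup_sketch and $nvmfpid are illustrative names, while the individual commands are the ones visible in the trace:

cleanup_sketch() {
    sync                                       # settle outstanding I/O before unloading modules
    for i in {1..20}; do                       # unload can race with device teardown, so retry
        modprobe -v -r nvme-tcp && break       # also removes nvme_fabrics/nvme_keyring, as logged
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"         # killprocess: stop the nvmf_tgt reactor (pid 2524681 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK test rules
    ip -4 addr flush cvl_0_1                   # clear the initiator-side test address
}

The "real 0m41.089s" timing block and the END/START banners are stamped by run_test, which brackets every suite; the new discovery_remove_ifc suite then begins its own boilerplate, locating the test storage and probing the lcov version, as the next lines show.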
00:30:31.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:31.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.649 --rc genhtml_branch_coverage=1 00:30:31.649 --rc genhtml_function_coverage=1 00:30:31.649 --rc genhtml_legend=1 00:30:31.649 --rc geninfo_all_blocks=1 00:30:31.649 --rc geninfo_unexecuted_blocks=1 00:30:31.649 00:30:31.649 ' 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:31.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.649 --rc genhtml_branch_coverage=1 00:30:31.649 --rc genhtml_function_coverage=1 00:30:31.649 --rc genhtml_legend=1 00:30:31.649 --rc geninfo_all_blocks=1 00:30:31.649 --rc geninfo_unexecuted_blocks=1 00:30:31.649 00:30:31.649 ' 00:30:31.649 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:31.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.649 --rc genhtml_branch_coverage=1 00:30:31.649 --rc genhtml_function_coverage=1 00:30:31.650 --rc genhtml_legend=1 00:30:31.650 --rc geninfo_all_blocks=1 00:30:31.650 --rc geninfo_unexecuted_blocks=1 00:30:31.650 00:30:31.650 ' 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:31.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.650 --rc genhtml_branch_coverage=1 00:30:31.650 --rc genhtml_function_coverage=1 00:30:31.650 --rc genhtml_legend=1 00:30:31.650 --rc geninfo_all_blocks=1 00:30:31.650 --rc geninfo_unexecuted_blocks=1 00:30:31.650 00:30:31.650 ' 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.650 
07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.650 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:31.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.916 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:40.057 07:25:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:40.057 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.057 07:25:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:40.057 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:40.057 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:40.057 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.057 07:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:40.058 
07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:40.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:30:40.058 00:30:40.058 --- 10.0.0.2 ping statistics --- 00:30:40.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.058 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:40.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:30:40.058 00:30:40.058 --- 10.0.0.1 ping statistics --- 00:30:40.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.058 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2535137 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2535137 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2535137 ']' 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
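All of the plumbing above comes from nvmf_tcp_init: the two e810 ports found earlier (cvl_0_0 and cvl_0_1) are turned into a back-to-back initiator/target pair, with the target port isolated in its own network namespace so a single host can play both roles. Reduced to the bare commands from the trace:

ip netns add cvl_0_0_ns_spdk                  # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port moves into the target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                            # initiator -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the reverse path

The two pings gate everything that follows, and sub-millisecond RTTs are expected since the ports are cabled back to back. The target is then launched inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten polls /var/tmp/spdk.sock, which is what the next lines are waiting on.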
00:30:40.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.058 07:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.058 [2024-11-27 07:25:50.411321] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:30:40.058 [2024-11-27 07:25:50.411393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.058 [2024-11-27 07:25:50.512475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.058 [2024-11-27 07:25:50.562956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.058 [2024-11-27 07:25:50.563004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.058 [2024-11-27 07:25:50.563013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.058 [2024-11-27 07:25:50.563020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.058 [2024-11-27 07:25:50.563026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.058 [2024-11-27 07:25:50.563796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.058 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.058 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:30:40.058 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:40.058 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.058 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.319 [2024-11-27 07:25:51.282832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.319 [2024-11-27 07:25:51.291088] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:40.319 null0 00:30:40.319 [2024-11-27 07:25:51.323039] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2535192 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2535192 /tmp/host.sock 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2535192 ']' 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:40.319 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.319 07:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.319 [2024-11-27 07:25:51.401173] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:30:40.319 [2024-11-27 07:25:51.401238] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535192 ] 00:30:40.319 [2024-11-27 07:25:51.491581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.581 [2024-11-27 07:25:51.545798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.152 07:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:42.536 [2024-11-27 07:25:53.340960] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:42.536 [2024-11-27 07:25:53.340981] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:42.536 [2024-11-27 07:25:53.340994] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:42.536 [2024-11-27 07:25:53.429278] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:42.536 [2024-11-27 07:25:53.531169] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:42.536 [2024-11-27 07:25:53.532093] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2450250:1 started. 00:30:42.536 [2024-11-27 07:25:53.533647] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:42.536 [2024-11-27 07:25:53.533692] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:42.536 [2024-11-27 07:25:53.533714] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:42.536 [2024-11-27 07:25:53.533727] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:42.536 [2024-11-27 07:25:53.533747] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.536 [2024-11-27 07:25:53.540301] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2450250 was disconnected and freed. delete nvme_qpair. 
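This is the heart of the discovery path: the host app connected to the discovery subsystem on 10.0.0.2:8009, fetched the discovery log page, found NVM subsystem nqn.2016-06.io.spdk:cnode0 on port 4420, attached controller nvme0, and surfaced its namespace as bdev nvme0n1 (the qpair 0x2450250 created and then freed around the attach is the short-lived probe connection). On the host side the whole flow is driven by three RPCs; the sketch below uses rpc.py directly in place of the test's rpc_cmd wrapper, with every argument as traced in this run:

# host app was started as: nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
rpc.py -s /tmp/host.sock framework_start_init       # finish the init deferred by --wait-for-rpc
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach   # return only once nvme0 is attached

The loss/reconnect timeouts are presumably this tight so that, once the interface is pulled later in the test, controller deletion can be observed within a couple of seconds instead of waiting out the defaults. The get_bdev_list polling that resumes below verifies nvme0n1 is really present before that happens.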
00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:42.536 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:42.797 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.797 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:42.797 07:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:43.737 07:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:44.679 07:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:46.063 07:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:47.005 07:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:47.947 [2024-11-27 07:25:58.974396] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:47.947 [2024-11-27 07:25:58.974430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.947 [2024-11-27 07:25:58.974439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.947 [2024-11-27 07:25:58.974446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.947 [2024-11-27 07:25:58.974452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.947 [2024-11-27 07:25:58.974458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.947 [2024-11-27 07:25:58.974463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.947 [2024-11-27 07:25:58.974468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.947 [2024-11-27 07:25:58.974473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.947 [2024-11-27 07:25:58.974483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.947 [2024-11-27 07:25:58.974488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.947 [2024-11-27 07:25:58.974494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242ca50 is same with the state(6) to be set 00:30:47.947 [2024-11-27 07:25:58.984419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242ca50 (9): Bad file descriptor 00:30:47.947 07:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:47.947 07:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.947 07:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:47.947 07:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.947 07:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:47.947 07:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.947 07:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:47.948 [2024-11-27 07:25:58.994450] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
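The repeated get_bdev_list / sleep blocks above are the test's polling loop: it re-reads the bdev list over the host app's JSON-RPC socket once per second until the list matches what the test expects. A minimal sketch of that pattern, reconstructed from the logged commands (rpc_cmd stands in for SPDK's scripts/rpc.py wrapper; the helper names mirror, rather than copy, discovery_remove_ifc.sh):

    # Flatten all bdev names into one sorted, space-separated line,
    # the same pipeline the xtrace shows: rpc_cmd | jq | sort | xargs.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expected string,
    # e.g. wait_for_bdev "" (device removed) or wait_for_bdev nvme1n1.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }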
00:30:47.948 [2024-11-27 07:25:58.994460] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:47.948 [2024-11-27 07:25:58.994463] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:47.948 [2024-11-27 07:25:58.994467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:47.948 [2024-11-27 07:25:58.994482] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:48.981 [2024-11-27 07:26:00.027435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:48.981 [2024-11-27 07:26:00.027524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242ca50 with addr=10.0.0.2, port=4420 00:30:48.981 [2024-11-27 07:26:00.027556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242ca50 is same with the state(6) to be set 00:30:48.981 [2024-11-27 07:26:00.027611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242ca50 (9): Bad file descriptor 00:30:48.981 [2024-11-27 07:26:00.028736] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:48.981 [2024-11-27 07:26:00.028808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:48.981 [2024-11-27 07:26:00.028831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:48.981 [2024-11-27 07:26:00.028855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:48.981 [2024-11-27 07:26:00.028876] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:48.981 [2024-11-27 07:26:00.028892] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:48.981 [2024-11-27 07:26:00.028906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:48.981 [2024-11-27 07:26:00.028928] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:48.981 [2024-11-27 07:26:00.028943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:48.981 07:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.981 07:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:48.981 07:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:49.923 [2024-11-27 07:26:01.031365] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:49.923 [2024-11-27 07:26:01.031383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
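The entries above show the bdev_nvme reconnect state machine cycling: the qpairs are deleted, the controller is disconnected, and the reconnect's connect() to 10.0.0.2:4420 times out (errno 110) because the test has taken the interface away, so the reset completes as failed and another attempt is scheduled. How aggressively this loop retries is tunable when a controller is attached directly. A hedged example follows; the timeout flags assume a recent SPDK rpc.py, so confirm them with `rpc.py bdev_nvme_attach_controller -h` on the tree in use:

    # Retry the connection every second, give up on the controller after
    # 30 s without a connection, and fail pending I/O after 5 s.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --ctrlr-loss-timeout-sec 30 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 5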
00:30:49.923 [2024-11-27 07:26:01.031393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:49.923 [2024-11-27 07:26:01.031399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:49.923 [2024-11-27 07:26:01.031405] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:49.923 [2024-11-27 07:26:01.031410] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:49.923 [2024-11-27 07:26:01.031414] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:49.923 [2024-11-27 07:26:01.031417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:49.923 [2024-11-27 07:26:01.031435] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:49.923 [2024-11-27 07:26:01.031455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.923 [2024-11-27 07:26:01.031462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.923 [2024-11-27 07:26:01.031470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.923 [2024-11-27 07:26:01.031475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.923 [2024-11-27 07:26:01.031481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.924 [2024-11-27 07:26:01.031486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.924 [2024-11-27 07:26:01.031492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.924 [2024-11-27 07:26:01.031497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.924 [2024-11-27 07:26:01.031503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.924 [2024-11-27 07:26:01.031508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.924 [2024-11-27 07:26:01.031513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
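Two kinds of numeric codes are mixed in these entries: the socket-level failures are plain errnos (110 above, 9 wherever a closed descriptor is flushed), while "ABORTED - SQ DELETION (00/08)" is an NVMe completion status (status code type 0x0, generic; status code 0x08) printed for each command that was still outstanding on the deleted submission queue. A quick shell one-off for decoding the errnos when reading such logs (illustrative; relies only on python3's standard library):

    # Map the errno values seen in this log to their names and messages.
    for e in 110 9; do
        python3 -c "import errno, os; print($e, errno.errorcode.get($e), '-', os.strerror($e))"
    done

This prints "110 ETIMEDOUT - Connection timed out" and "9 EBADF - Bad file descriptor", matching the text SPDK attaches to the same numbers.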
00:30:49.924 [2024-11-27 07:26:01.031831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c1a0 (9): Bad file descriptor 00:30:49.924 [2024-11-27 07:26:01.032842] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:49.924 [2024-11-27 07:26:01.032849] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.924 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.184 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:50.184 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:50.185 07:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.127 07:26:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:51.127 07:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:52.071 [2024-11-27 07:26:03.087342] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:52.071 [2024-11-27 07:26:03.087356] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:52.071 [2024-11-27 07:26:03.087366] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:52.071 [2024-11-27 07:26:03.174609] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:52.332 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:52.332 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.332 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:52.332 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.332 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:52.332 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.333 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:52.333 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.333 [2024-11-27 07:26:03.356613] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:52.333 [2024-11-27 07:26:03.357144] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x24599c0:1 started. 
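This is the discovery service re-attaching the subsystem once the interface is back: the discovery controller connects to 10.0.0.2:8009, fetches the discovery log page, finds the NVM entry for cnode0, and attaches controller nvme1 at 10.0.0.2:4420, whose namespace surfaces as the bdev nvme1n1 that the wait loop above is polling for. The equivalent standalone RPC is sketched below; flag spellings assume a recent SPDK rpc.py (-w waits for the initial attach to finish):

    # Start the discovery service on the host app; subsystems listed in the
    # target's discovery log page are attached automatically as nvme<N>.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -w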
00:30:52.333 [2024-11-27 07:26:03.358040] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:52.333 [2024-11-27 07:26:03.358068] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:52.333 [2024-11-27 07:26:03.358082] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:52.333 [2024-11-27 07:26:03.358094] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:52.333 [2024-11-27 07:26:03.358100] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:52.333 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:52.333 07:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:52.333 [2024-11-27 07:26:03.364102] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x24599c0 was disconnected and freed. delete nvme_qpair. 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2535192 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2535192 ']' 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2535192 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2535192 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2535192' 00:30:53.537 killing process with pid 2535192 
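killprocess above is the harness's guarded kill: bail out on an empty pid, probe the process with kill -0, resolve its command name with ps so a sudo wrapper is never signalled, then kill and reap it. A simplified, Linux-only reconstruction of the shape visible in the surrounding xtrace (the real helper in autotest_common.sh also covers FreeBSD and sudo-wrapped targets):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1               # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1         # refuse to kill the sudo wrapper
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                             # reaps it when it is our child
    }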
00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2535192 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2535192 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.537 rmmod nvme_tcp 00:30:53.537 rmmod nvme_fabrics 00:30:53.537 rmmod nvme_keyring 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2535137 ']' 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2535137 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2535137 ']' 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2535137 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2535137 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2535137' 00:30:53.537 killing process with pid 2535137 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2535137 00:30:53.537 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2535137 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:30:53.798 07:26:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.798 07:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.713 07:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:55.713 00:30:55.713 real 0m24.269s 00:30:55.713 user 0m29.294s 00:30:55.713 sys 0m7.062s 00:30:55.713 07:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.713 07:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.713 ************************************ 00:30:55.713 END TEST nvmf_discovery_remove_ifc 00:30:55.713 ************************************ 00:30:55.975 07:26:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:55.975 07:26:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:55.975 07:26:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.975 07:26:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.975 ************************************ 00:30:55.975 START TEST nvmf_identify_kernel_target 00:30:55.975 ************************************ 00:30:55.975 07:26:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:55.975 * Looking for test storage... 
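The xtrace that follows is scripts/common.sh deciding which lcov options to use by comparing the installed lcov version against 1.15: both version strings are split into arrays and compared field by field. A stripped-down sketch of the same idea, assuming plain dotted versions (the real cmp_versions also splits on '-' and ':' and supports the '>', '<', '=', and '==' operators):

    # Succeed when dotted version $1 is strictly less than $2.
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1  # equal versions are not less-than
    }

    lt 1.15 2 && echo "1.15 < 2"  # the first field already decides: 1 < 2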
00:30:55.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:55.975 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.976 --rc genhtml_branch_coverage=1 00:30:55.976 --rc genhtml_function_coverage=1 00:30:55.976 --rc genhtml_legend=1 00:30:55.976 --rc geninfo_all_blocks=1 00:30:55.976 --rc geninfo_unexecuted_blocks=1 00:30:55.976 00:30:55.976 ' 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.976 --rc genhtml_branch_coverage=1 00:30:55.976 --rc genhtml_function_coverage=1 00:30:55.976 --rc genhtml_legend=1 00:30:55.976 --rc geninfo_all_blocks=1 00:30:55.976 --rc geninfo_unexecuted_blocks=1 00:30:55.976 00:30:55.976 ' 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.976 --rc genhtml_branch_coverage=1 00:30:55.976 --rc genhtml_function_coverage=1 00:30:55.976 --rc genhtml_legend=1 00:30:55.976 --rc geninfo_all_blocks=1 00:30:55.976 --rc geninfo_unexecuted_blocks=1 00:30:55.976 00:30:55.976 ' 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.976 --rc genhtml_branch_coverage=1 00:30:55.976 --rc genhtml_function_coverage=1 00:30:55.976 --rc genhtml_legend=1 00:30:55.976 --rc geninfo_all_blocks=1 00:30:55.976 --rc geninfo_unexecuted_blocks=1 00:30:55.976 00:30:55.976 ' 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.976 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:56.239 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.239 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.239 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.239 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:56.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.240 07:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:04.390 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.391 07:26:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:04.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:04.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:04.391 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:04.391 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:31:04.391 00:31:04.391 --- 10.0.0.2 ping statistics --- 00:31:04.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.391 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:31:04.391 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:31:04.392 00:31:04.392 --- 10.0.0.1 ping statistics --- 00:31:04.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.392 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.392 07:26:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:04.392 07:26:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:07.697 Waiting for block devices as requested 00:31:07.697 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:07.697 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:07.697 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:07.697 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:07.697 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:07.697 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:07.697 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:07.697 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:07.958 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:07.958 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:08.219 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:08.219 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:08.219 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:08.478 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:08.478 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:08.478 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:08.478 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
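The entries that follow run /dev/nvme0n1 through a partition check (spdk-gpt.py, then blkid) and, since the device is unused, publish it through the in-kernel nvmet target over configfs. bash xtrace does not print redirection targets, so only the echo payloads appear in the log; the attribute paths below are the standard nvmet configfs names and are inferred rather than logged:

    # Condensed sketch of the kernel-target setup (run as root, with the
    # nvmet and nvmet-tcp modules loaded). Attribute filenames are inferred.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The nvme discover call logged right after confirms the result: two discovery log records, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420.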
00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:09.050 07:26:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:09.050 No valid GPT data, bailing 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:09.050 00:31:09.050 Discovery Log Number of Records 2, Generation counter 2 00:31:09.050 =====Discovery Log Entry 0====== 00:31:09.050 trtype: tcp 00:31:09.050 adrfam: ipv4 00:31:09.050 subtype: current discovery subsystem 00:31:09.050 treq: not specified, sq flow control disable supported 00:31:09.050 portid: 1 00:31:09.050 trsvcid: 4420 00:31:09.050 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:09.050 traddr: 10.0.0.1 00:31:09.050 eflags: none 00:31:09.050 sectype: none 00:31:09.050 =====Discovery Log Entry 1====== 00:31:09.050 trtype: tcp 00:31:09.050 adrfam: ipv4 00:31:09.050 subtype: nvme subsystem 00:31:09.050 treq: not specified, sq flow control disable 
supported 00:31:09.050 portid: 1 00:31:09.050 trsvcid: 4420 00:31:09.050 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:09.050 traddr: 10.0.0.1 00:31:09.050 eflags: none 00:31:09.050 sectype: none 00:31:09.050 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:09.050 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:09.313 ===================================================== 00:31:09.313 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:09.313 ===================================================== 00:31:09.313 Controller Capabilities/Features 00:31:09.313 ================================ 00:31:09.313 Vendor ID: 0000 00:31:09.313 Subsystem Vendor ID: 0000 00:31:09.313 Serial Number: ec2197bc5681ec656d32 00:31:09.313 Model Number: Linux 00:31:09.313 Firmware Version: 6.8.9-20 00:31:09.313 Recommended Arb Burst: 0 00:31:09.313 IEEE OUI Identifier: 00 00 00 00:31:09.313 Multi-path I/O 00:31:09.313 May have multiple subsystem ports: No 00:31:09.313 May have multiple controllers: No 00:31:09.313 Associated with SR-IOV VF: No 00:31:09.313 Max Data Transfer Size: Unlimited 00:31:09.313 Max Number of Namespaces: 0 00:31:09.313 Max Number of I/O Queues: 1024 00:31:09.313 NVMe Specification Version (VS): 1.3 00:31:09.313 NVMe Specification Version (Identify): 1.3 00:31:09.313 Maximum Queue Entries: 1024 00:31:09.313 Contiguous Queues Required: No 00:31:09.313 Arbitration Mechanisms Supported 00:31:09.313 Weighted Round Robin: Not Supported 00:31:09.313 Vendor Specific: Not Supported 00:31:09.313 Reset Timeout: 7500 ms 00:31:09.313 Doorbell Stride: 4 bytes 00:31:09.313 NVM Subsystem Reset: Not Supported 00:31:09.313 Command Sets Supported 00:31:09.313 NVM Command Set: Supported 00:31:09.313 Boot Partition: Not Supported 00:31:09.313 Memory Page Size Minimum: 4096 bytes 00:31:09.313 Memory Page Size Maximum: 4096 bytes 00:31:09.313 Persistent Memory Region: Not Supported 00:31:09.313 Optional Asynchronous Events Supported 00:31:09.313 Namespace Attribute Notices: Not Supported 00:31:09.313 Firmware Activation Notices: Not Supported 00:31:09.313 ANA Change Notices: Not Supported 00:31:09.313 PLE Aggregate Log Change Notices: Not Supported 00:31:09.313 LBA Status Info Alert Notices: Not Supported 00:31:09.313 EGE Aggregate Log Change Notices: Not Supported 00:31:09.313 Normal NVM Subsystem Shutdown event: Not Supported 00:31:09.313 Zone Descriptor Change Notices: Not Supported 00:31:09.313 Discovery Log Change Notices: Supported 00:31:09.313 Controller Attributes 00:31:09.313 128-bit Host Identifier: Not Supported 00:31:09.313 Non-Operational Permissive Mode: Not Supported 00:31:09.313 NVM Sets: Not Supported 00:31:09.313 Read Recovery Levels: Not Supported 00:31:09.313 Endurance Groups: Not Supported 00:31:09.313 Predictable Latency Mode: Not Supported 00:31:09.313 Traffic Based Keep ALive: Not Supported 00:31:09.313 Namespace Granularity: Not Supported 00:31:09.313 SQ Associations: Not Supported 00:31:09.313 UUID List: Not Supported 00:31:09.313 Multi-Domain Subsystem: Not Supported 00:31:09.313 Fixed Capacity Management: Not Supported 00:31:09.313 Variable Capacity Management: Not Supported 00:31:09.313 Delete Endurance Group: Not Supported 00:31:09.313 Delete NVM Set: Not Supported 00:31:09.313 Extended LBA Formats Supported: Not Supported 00:31:09.313 Flexible Data Placement 
Supported: Not Supported 00:31:09.313 00:31:09.313 Controller Memory Buffer Support 00:31:09.313 ================================ 00:31:09.313 Supported: No 00:31:09.313 00:31:09.313 Persistent Memory Region Support 00:31:09.313 ================================ 00:31:09.313 Supported: No 00:31:09.313 00:31:09.313 Admin Command Set Attributes 00:31:09.313 ============================ 00:31:09.313 Security Send/Receive: Not Supported 00:31:09.313 Format NVM: Not Supported 00:31:09.313 Firmware Activate/Download: Not Supported 00:31:09.313 Namespace Management: Not Supported 00:31:09.313 Device Self-Test: Not Supported 00:31:09.313 Directives: Not Supported 00:31:09.313 NVMe-MI: Not Supported 00:31:09.313 Virtualization Management: Not Supported 00:31:09.313 Doorbell Buffer Config: Not Supported 00:31:09.313 Get LBA Status Capability: Not Supported 00:31:09.313 Command & Feature Lockdown Capability: Not Supported 00:31:09.313 Abort Command Limit: 1 00:31:09.313 Async Event Request Limit: 1 00:31:09.313 Number of Firmware Slots: N/A 00:31:09.313 Firmware Slot 1 Read-Only: N/A 00:31:09.313 Firmware Activation Without Reset: N/A 00:31:09.313 Multiple Update Detection Support: N/A 00:31:09.313 Firmware Update Granularity: No Information Provided 00:31:09.313 Per-Namespace SMART Log: No 00:31:09.313 Asymmetric Namespace Access Log Page: Not Supported 00:31:09.313 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:09.313 Command Effects Log Page: Not Supported 00:31:09.313 Get Log Page Extended Data: Supported 00:31:09.313 Telemetry Log Pages: Not Supported 00:31:09.313 Persistent Event Log Pages: Not Supported 00:31:09.313 Supported Log Pages Log Page: May Support 00:31:09.313 Commands Supported & Effects Log Page: Not Supported 00:31:09.313 Feature Identifiers & Effects Log Page:May Support 00:31:09.313 NVMe-MI Commands & Effects Log Page: May Support 00:31:09.313 Data Area 4 for Telemetry Log: Not Supported 00:31:09.313 Error Log Page Entries Supported: 1 00:31:09.313 Keep Alive: Not Supported 00:31:09.313 00:31:09.313 NVM Command Set Attributes 00:31:09.313 ========================== 00:31:09.313 Submission Queue Entry Size 00:31:09.313 Max: 1 00:31:09.313 Min: 1 00:31:09.313 Completion Queue Entry Size 00:31:09.313 Max: 1 00:31:09.313 Min: 1 00:31:09.313 Number of Namespaces: 0 00:31:09.313 Compare Command: Not Supported 00:31:09.313 Write Uncorrectable Command: Not Supported 00:31:09.313 Dataset Management Command: Not Supported 00:31:09.313 Write Zeroes Command: Not Supported 00:31:09.313 Set Features Save Field: Not Supported 00:31:09.313 Reservations: Not Supported 00:31:09.313 Timestamp: Not Supported 00:31:09.313 Copy: Not Supported 00:31:09.313 Volatile Write Cache: Not Present 00:31:09.313 Atomic Write Unit (Normal): 1 00:31:09.313 Atomic Write Unit (PFail): 1 00:31:09.313 Atomic Compare & Write Unit: 1 00:31:09.313 Fused Compare & Write: Not Supported 00:31:09.313 Scatter-Gather List 00:31:09.313 SGL Command Set: Supported 00:31:09.313 SGL Keyed: Not Supported 00:31:09.313 SGL Bit Bucket Descriptor: Not Supported 00:31:09.313 SGL Metadata Pointer: Not Supported 00:31:09.313 Oversized SGL: Not Supported 00:31:09.313 SGL Metadata Address: Not Supported 00:31:09.313 SGL Offset: Supported 00:31:09.313 Transport SGL Data Block: Not Supported 00:31:09.313 Replay Protected Memory Block: Not Supported 00:31:09.313 00:31:09.313 Firmware Slot Information 00:31:09.313 ========================= 00:31:09.313 Active slot: 0 00:31:09.313 00:31:09.313 00:31:09.313 Error Log 00:31:09.313 
========= 00:31:09.313 00:31:09.313 Active Namespaces 00:31:09.313 ================= 00:31:09.313 Discovery Log Page 00:31:09.313 ================== 00:31:09.313 Generation Counter: 2 00:31:09.313 Number of Records: 2 00:31:09.313 Record Format: 0 00:31:09.313 00:31:09.313 Discovery Log Entry 0 00:31:09.313 ---------------------- 00:31:09.313 Transport Type: 3 (TCP) 00:31:09.313 Address Family: 1 (IPv4) 00:31:09.313 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:09.313 Entry Flags: 00:31:09.313 Duplicate Returned Information: 0 00:31:09.313 Explicit Persistent Connection Support for Discovery: 0 00:31:09.313 Transport Requirements: 00:31:09.313 Secure Channel: Not Specified 00:31:09.313 Port ID: 1 (0x0001) 00:31:09.313 Controller ID: 65535 (0xffff) 00:31:09.313 Admin Max SQ Size: 32 00:31:09.313 Transport Service Identifier: 4420 00:31:09.313 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:09.313 Transport Address: 10.0.0.1 00:31:09.313 Discovery Log Entry 1 00:31:09.313 ---------------------- 00:31:09.313 Transport Type: 3 (TCP) 00:31:09.313 Address Family: 1 (IPv4) 00:31:09.314 Subsystem Type: 2 (NVM Subsystem) 00:31:09.314 Entry Flags: 00:31:09.314 Duplicate Returned Information: 0 00:31:09.314 Explicit Persistent Connection Support for Discovery: 0 00:31:09.314 Transport Requirements: 00:31:09.314 Secure Channel: Not Specified 00:31:09.314 Port ID: 1 (0x0001) 00:31:09.314 Controller ID: 65535 (0xffff) 00:31:09.314 Admin Max SQ Size: 32 00:31:09.314 Transport Service Identifier: 4420 00:31:09.314 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:09.314 Transport Address: 10.0.0.1 00:31:09.314 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:09.314 get_feature(0x01) failed 00:31:09.314 get_feature(0x02) failed 00:31:09.314 get_feature(0x04) failed 00:31:09.314 ===================================================== 00:31:09.314 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:09.314 ===================================================== 00:31:09.314 Controller Capabilities/Features 00:31:09.314 ================================ 00:31:09.314 Vendor ID: 0000 00:31:09.314 Subsystem Vendor ID: 0000 00:31:09.314 Serial Number: ff21db4efe2c4a532d38 00:31:09.314 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:09.314 Firmware Version: 6.8.9-20 00:31:09.314 Recommended Arb Burst: 6 00:31:09.314 IEEE OUI Identifier: 00 00 00 00:31:09.314 Multi-path I/O 00:31:09.314 May have multiple subsystem ports: Yes 00:31:09.314 May have multiple controllers: Yes 00:31:09.314 Associated with SR-IOV VF: No 00:31:09.314 Max Data Transfer Size: Unlimited 00:31:09.314 Max Number of Namespaces: 1024 00:31:09.314 Max Number of I/O Queues: 128 00:31:09.314 NVMe Specification Version (VS): 1.3 00:31:09.314 NVMe Specification Version (Identify): 1.3 00:31:09.314 Maximum Queue Entries: 1024 00:31:09.314 Contiguous Queues Required: No 00:31:09.314 Arbitration Mechanisms Supported 00:31:09.314 Weighted Round Robin: Not Supported 00:31:09.314 Vendor Specific: Not Supported 00:31:09.314 Reset Timeout: 7500 ms 00:31:09.314 Doorbell Stride: 4 bytes 00:31:09.314 NVM Subsystem Reset: Not Supported 00:31:09.314 Command Sets Supported 00:31:09.314 NVM Command Set: Supported 00:31:09.314 Boot Partition: Not Supported 00:31:09.314 
Memory Page Size Minimum: 4096 bytes 00:31:09.314 Memory Page Size Maximum: 4096 bytes 00:31:09.314 Persistent Memory Region: Not Supported 00:31:09.314 Optional Asynchronous Events Supported 00:31:09.314 Namespace Attribute Notices: Supported 00:31:09.314 Firmware Activation Notices: Not Supported 00:31:09.314 ANA Change Notices: Supported 00:31:09.314 PLE Aggregate Log Change Notices: Not Supported 00:31:09.314 LBA Status Info Alert Notices: Not Supported 00:31:09.314 EGE Aggregate Log Change Notices: Not Supported 00:31:09.314 Normal NVM Subsystem Shutdown event: Not Supported 00:31:09.314 Zone Descriptor Change Notices: Not Supported 00:31:09.314 Discovery Log Change Notices: Not Supported 00:31:09.314 Controller Attributes 00:31:09.314 128-bit Host Identifier: Supported 00:31:09.314 Non-Operational Permissive Mode: Not Supported 00:31:09.314 NVM Sets: Not Supported 00:31:09.314 Read Recovery Levels: Not Supported 00:31:09.314 Endurance Groups: Not Supported 00:31:09.314 Predictable Latency Mode: Not Supported 00:31:09.314 Traffic Based Keep ALive: Supported 00:31:09.314 Namespace Granularity: Not Supported 00:31:09.314 SQ Associations: Not Supported 00:31:09.314 UUID List: Not Supported 00:31:09.314 Multi-Domain Subsystem: Not Supported 00:31:09.314 Fixed Capacity Management: Not Supported 00:31:09.314 Variable Capacity Management: Not Supported 00:31:09.314 Delete Endurance Group: Not Supported 00:31:09.314 Delete NVM Set: Not Supported 00:31:09.314 Extended LBA Formats Supported: Not Supported 00:31:09.314 Flexible Data Placement Supported: Not Supported 00:31:09.314 00:31:09.314 Controller Memory Buffer Support 00:31:09.314 ================================ 00:31:09.314 Supported: No 00:31:09.314 00:31:09.314 Persistent Memory Region Support 00:31:09.314 ================================ 00:31:09.314 Supported: No 00:31:09.314 00:31:09.314 Admin Command Set Attributes 00:31:09.314 ============================ 00:31:09.314 Security Send/Receive: Not Supported 00:31:09.314 Format NVM: Not Supported 00:31:09.314 Firmware Activate/Download: Not Supported 00:31:09.314 Namespace Management: Not Supported 00:31:09.314 Device Self-Test: Not Supported 00:31:09.314 Directives: Not Supported 00:31:09.314 NVMe-MI: Not Supported 00:31:09.314 Virtualization Management: Not Supported 00:31:09.314 Doorbell Buffer Config: Not Supported 00:31:09.314 Get LBA Status Capability: Not Supported 00:31:09.314 Command & Feature Lockdown Capability: Not Supported 00:31:09.314 Abort Command Limit: 4 00:31:09.314 Async Event Request Limit: 4 00:31:09.314 Number of Firmware Slots: N/A 00:31:09.314 Firmware Slot 1 Read-Only: N/A 00:31:09.314 Firmware Activation Without Reset: N/A 00:31:09.314 Multiple Update Detection Support: N/A 00:31:09.314 Firmware Update Granularity: No Information Provided 00:31:09.314 Per-Namespace SMART Log: Yes 00:31:09.314 Asymmetric Namespace Access Log Page: Supported 00:31:09.314 ANA Transition Time : 10 sec 00:31:09.314 00:31:09.314 Asymmetric Namespace Access Capabilities 00:31:09.314 ANA Optimized State : Supported 00:31:09.314 ANA Non-Optimized State : Supported 00:31:09.314 ANA Inaccessible State : Supported 00:31:09.314 ANA Persistent Loss State : Supported 00:31:09.314 ANA Change State : Supported 00:31:09.314 ANAGRPID is not changed : No 00:31:09.314 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:09.314 00:31:09.314 ANA Group Identifier Maximum : 128 00:31:09.314 Number of ANA Group Identifiers : 128 00:31:09.314 Max Number of Allowed Namespaces : 1024 00:31:09.314 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:09.314 Command Effects Log Page: Supported 00:31:09.314 Get Log Page Extended Data: Supported 00:31:09.314 Telemetry Log Pages: Not Supported 00:31:09.314 Persistent Event Log Pages: Not Supported 00:31:09.314 Supported Log Pages Log Page: May Support 00:31:09.314 Commands Supported & Effects Log Page: Not Supported 00:31:09.314 Feature Identifiers & Effects Log Page:May Support 00:31:09.314 NVMe-MI Commands & Effects Log Page: May Support 00:31:09.314 Data Area 4 for Telemetry Log: Not Supported 00:31:09.314 Error Log Page Entries Supported: 128 00:31:09.314 Keep Alive: Supported 00:31:09.314 Keep Alive Granularity: 1000 ms 00:31:09.314 00:31:09.314 NVM Command Set Attributes 00:31:09.314 ========================== 00:31:09.314 Submission Queue Entry Size 00:31:09.314 Max: 64 00:31:09.314 Min: 64 00:31:09.314 Completion Queue Entry Size 00:31:09.314 Max: 16 00:31:09.314 Min: 16 00:31:09.314 Number of Namespaces: 1024 00:31:09.314 Compare Command: Not Supported 00:31:09.314 Write Uncorrectable Command: Not Supported 00:31:09.314 Dataset Management Command: Supported 00:31:09.314 Write Zeroes Command: Supported 00:31:09.314 Set Features Save Field: Not Supported 00:31:09.314 Reservations: Not Supported 00:31:09.314 Timestamp: Not Supported 00:31:09.314 Copy: Not Supported 00:31:09.314 Volatile Write Cache: Present 00:31:09.314 Atomic Write Unit (Normal): 1 00:31:09.314 Atomic Write Unit (PFail): 1 00:31:09.314 Atomic Compare & Write Unit: 1 00:31:09.314 Fused Compare & Write: Not Supported 00:31:09.314 Scatter-Gather List 00:31:09.314 SGL Command Set: Supported 00:31:09.314 SGL Keyed: Not Supported 00:31:09.314 SGL Bit Bucket Descriptor: Not Supported 00:31:09.314 SGL Metadata Pointer: Not Supported 00:31:09.314 Oversized SGL: Not Supported 00:31:09.314 SGL Metadata Address: Not Supported 00:31:09.314 SGL Offset: Supported 00:31:09.314 Transport SGL Data Block: Not Supported 00:31:09.314 Replay Protected Memory Block: Not Supported 00:31:09.314 00:31:09.314 Firmware Slot Information 00:31:09.314 ========================= 00:31:09.314 Active slot: 0 00:31:09.314 00:31:09.314 Asymmetric Namespace Access 00:31:09.314 =========================== 00:31:09.314 Change Count : 0 00:31:09.314 Number of ANA Group Descriptors : 1 00:31:09.314 ANA Group Descriptor : 0 00:31:09.314 ANA Group ID : 1 00:31:09.314 Number of NSID Values : 1 00:31:09.314 Change Count : 0 00:31:09.314 ANA State : 1 00:31:09.314 Namespace Identifier : 1 00:31:09.314 00:31:09.314 Commands Supported and Effects 00:31:09.314 ============================== 00:31:09.314 Admin Commands 00:31:09.314 -------------- 00:31:09.314 Get Log Page (02h): Supported 00:31:09.314 Identify (06h): Supported 00:31:09.315 Abort (08h): Supported 00:31:09.315 Set Features (09h): Supported 00:31:09.315 Get Features (0Ah): Supported 00:31:09.315 Asynchronous Event Request (0Ch): Supported 00:31:09.315 Keep Alive (18h): Supported 00:31:09.315 I/O Commands 00:31:09.315 ------------ 00:31:09.315 Flush (00h): Supported 00:31:09.315 Write (01h): Supported LBA-Change 00:31:09.315 Read (02h): Supported 00:31:09.315 Write Zeroes (08h): Supported LBA-Change 00:31:09.315 Dataset Management (09h): Supported 00:31:09.315 00:31:09.315 Error Log 00:31:09.315 ========= 00:31:09.315 Entry: 0 00:31:09.315 Error Count: 0x3 00:31:09.315 Submission Queue Id: 0x0 00:31:09.315 Command Id: 0x5 00:31:09.315 Phase Bit: 0 00:31:09.315 Status Code: 0x2 00:31:09.315 Status Code Type: 0x0 00:31:09.315 Do Not Retry: 1 00:31:09.315 
Error Location: 0x28 00:31:09.315 LBA: 0x0 00:31:09.315 Namespace: 0x0 00:31:09.315 Vendor Log Page: 0x0 00:31:09.315 ----------- 00:31:09.315 Entry: 1 00:31:09.315 Error Count: 0x2 00:31:09.315 Submission Queue Id: 0x0 00:31:09.315 Command Id: 0x5 00:31:09.315 Phase Bit: 0 00:31:09.315 Status Code: 0x2 00:31:09.315 Status Code Type: 0x0 00:31:09.315 Do Not Retry: 1 00:31:09.315 Error Location: 0x28 00:31:09.315 LBA: 0x0 00:31:09.315 Namespace: 0x0 00:31:09.315 Vendor Log Page: 0x0 00:31:09.315 ----------- 00:31:09.315 Entry: 2 00:31:09.315 Error Count: 0x1 00:31:09.315 Submission Queue Id: 0x0 00:31:09.315 Command Id: 0x4 00:31:09.315 Phase Bit: 0 00:31:09.315 Status Code: 0x2 00:31:09.315 Status Code Type: 0x0 00:31:09.315 Do Not Retry: 1 00:31:09.315 Error Location: 0x28 00:31:09.315 LBA: 0x0 00:31:09.315 Namespace: 0x0 00:31:09.315 Vendor Log Page: 0x0 00:31:09.315 00:31:09.315 Number of Queues 00:31:09.315 ================ 00:31:09.315 Number of I/O Submission Queues: 128 00:31:09.315 Number of I/O Completion Queues: 128 00:31:09.315 00:31:09.315 ZNS Specific Controller Data 00:31:09.315 ============================ 00:31:09.315 Zone Append Size Limit: 0 00:31:09.315 00:31:09.315 00:31:09.315 Active Namespaces 00:31:09.315 ================= 00:31:09.315 get_feature(0x05) failed 00:31:09.315 Namespace ID:1 00:31:09.315 Command Set Identifier: NVM (00h) 00:31:09.315 Deallocate: Supported 00:31:09.315 Deallocated/Unwritten Error: Not Supported 00:31:09.315 Deallocated Read Value: Unknown 00:31:09.315 Deallocate in Write Zeroes: Not Supported 00:31:09.315 Deallocated Guard Field: 0xFFFF 00:31:09.315 Flush: Supported 00:31:09.315 Reservation: Not Supported 00:31:09.315 Namespace Sharing Capabilities: Multiple Controllers 00:31:09.315 Size (in LBAs): 3750748848 (1788GiB) 00:31:09.315 Capacity (in LBAs): 3750748848 (1788GiB) 00:31:09.315 Utilization (in LBAs): 3750748848 (1788GiB) 00:31:09.315 UUID: a17e85da-2de9-4ca7-9ae4-f998bff534c6 00:31:09.315 Thin Provisioning: Not Supported 00:31:09.315 Per-NS Atomic Units: Yes 00:31:09.315 Atomic Write Unit (Normal): 8 00:31:09.315 Atomic Write Unit (PFail): 8 00:31:09.315 Preferred Write Granularity: 8 00:31:09.315 Atomic Compare & Write Unit: 8 00:31:09.315 Atomic Boundary Size (Normal): 0 00:31:09.315 Atomic Boundary Size (PFail): 0 00:31:09.315 Atomic Boundary Offset: 0 00:31:09.315 NGUID/EUI64 Never Reused: No 00:31:09.315 ANA group ID: 1 00:31:09.315 Namespace Write Protected: No 00:31:09.315 Number of LBA Formats: 1 00:31:09.315 Current LBA Format: LBA Format #00 00:31:09.315 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:09.315 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.315 rmmod nvme_tcp 00:31:09.315 rmmod nvme_fabrics 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.315 07:26:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:11.863 07:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:15.165 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:15.165 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:15.735 00:31:15.735 real 0m19.707s 00:31:15.735 user 0m5.369s 00:31:15.735 sys 0m11.347s 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:15.735 ************************************ 00:31:15.735 END TEST nvmf_identify_kernel_target 00:31:15.735 ************************************ 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.735 ************************************ 00:31:15.735 START TEST nvmf_auth_host 00:31:15.735 ************************************ 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:15.735 * Looking for test storage... 
00:31:15.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:15.735 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.997 07:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:15.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.997 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.997 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:15.997 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:15.997 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:15.997 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:15.997 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:15.997 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.998 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:24.143 07:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:24.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:24.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.143 
07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.143 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:24.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:24.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.144 07:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:24.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:31:24.144 00:31:24.144 --- 10.0.0.2 ping statistics --- 00:31:24.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.144 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:24.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:31:24.144 00:31:24.144 --- 10.0.0.1 ping statistics --- 00:31:24.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.144 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2549720 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2549720 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2549720 ']' 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
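With nvmf_tgt now running inside cvl_0_0_ns_spdk, the auth test turns to key material: each gen_dhchap_key call below pulls random bytes from /dev/urandom with xxd and then, in its python step, wraps them in the DHHC-1 representation used for NVMe-oF DH-HMAC-CHAP secrets. A sketch of that encoding, assuming the standard secret representation (base64 of the key bytes with a little-endian CRC-32 of the key appended, prefixed by a hash identifier), using the first key value drawn in this run:

python3 - <<'EOF'
import base64, struct, zlib
key = bytes.fromhex("eeaf27bab71c969667e2fc518ffbaa50")  # 32-hex-digit key from the trace
digest = 0  # 0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map in the trace
crc = struct.pack("<I", zlib.crc32(key))                  # CRC-32 of the key, little-endian
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF

The resulting string is what lands in the chmod-0600 /tmp/spdk.key-*.XXX files and is later handed to both target and initiator.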
00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.144 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.406 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.406 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eeaf27bab71c969667e2fc518ffbaa50 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eg7 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eeaf27bab71c969667e2fc518ffbaa50 0 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eeaf27bab71c969667e2fc518ffbaa50 0 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eeaf27bab71c969667e2fc518ffbaa50 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eg7 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eg7 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eg7 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.407 07:26:35 
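Each gen_dhchap_key call in this run (nvmf/common.sh@751-760) draws len/2 random bytes with xxd, keeps them as a lowercase hex string, and wraps that string in the NVMe DH-HMAC-CHAP secret representation before locking the temp file down with chmod 0600. The wrapper is DHHC-1:<dd>:<base64>:, where <dd> is the hash hint from the digests table (00 null, 01 sha256, 02 sha384, 03 sha512) and the base64 payload is the secret followed by its CRC-32; that CRC step is what the otherwise opaque "python -" line computes. A hedged re-implementation (the little-endian CRC-32 suffix is reconstructed from the NVMe secret format, not spelled out in the trace):

    # Sketch of gen_dhchap_key as traced above; see the hedges in the lead-in.
    gen_dhchap_key() {
        local digest=$1 len=$2 key file
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()), end="")' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                               # it is a secret, after all
        echo "$file"
    }

With that in place, keys[0]=$(gen_dhchap_key null 32) and ckeys[0]=$(gen_dhchap_key sha512 64) reproduce the first host/controller pairing generated above; the trace goes on to build five such pairs.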
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=48c853c145a5bf7519ad018c2e550ebc2222528634643fb28e9f407d077698a0 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4QM 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 48c853c145a5bf7519ad018c2e550ebc2222528634643fb28e9f407d077698a0 3 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 48c853c145a5bf7519ad018c2e550ebc2222528634643fb28e9f407d077698a0 3 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=48c853c145a5bf7519ad018c2e550ebc2222528634643fb28e9f407d077698a0 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4QM 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4QM 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4QM 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:24.407 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f400e98e990b2062dcbfe81a0efc5c3617aa4c3f7a4f4d4a 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7on 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f400e98e990b2062dcbfe81a0efc5c3617aa4c3f7a4f4d4a 0 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f400e98e990b2062dcbfe81a0efc5c3617aa4c3f7a4f4d4a 0 
00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f400e98e990b2062dcbfe81a0efc5c3617aa4c3f7a4f4d4a 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7on 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7on 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7on 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ed4eb591fb4412f9e5425f7b12a38907c97e4e58312c05e7 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.NLy 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ed4eb591fb4412f9e5425f7b12a38907c97e4e58312c05e7 2 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ed4eb591fb4412f9e5425f7b12a38907c97e4e58312c05e7 2 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ed4eb591fb4412f9e5425f7b12a38907c97e4e58312c05e7 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.NLy 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.NLy 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.NLy 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.669 07:26:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f557b433bb756885e9e5a37720058e9c 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3yL 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f557b433bb756885e9e5a37720058e9c 1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f557b433bb756885e9e5a37720058e9c 1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f557b433bb756885e9e5a37720058e9c 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3yL 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3yL 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.3yL 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d25492a8c326e897aca4d3d8bfc95659 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hCE 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d25492a8c326e897aca4d3d8bfc95659 1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d25492a8c326e897aca4d3d8bfc95659 1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d25492a8c326e897aca4d3d8bfc95659 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hCE 00:31:24.669 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hCE 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hCE 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:24.930 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=76f8f27cfa080cff8e0e968d13d6644958f84b248debf12b 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ate 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 76f8f27cfa080cff8e0e968d13d6644958f84b248debf12b 2 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 76f8f27cfa080cff8e0e968d13d6644958f84b248debf12b 2 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=76f8f27cfa080cff8e0e968d13d6644958f84b248debf12b 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ate 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ate 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ate 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:24.931 07:26:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9f4ebb3018a997721e9bec83c6d18366 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6BY 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9f4ebb3018a997721e9bec83c6d18366 0 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9f4ebb3018a997721e9bec83c6d18366 0 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9f4ebb3018a997721e9bec83c6d18366 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:24.931 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6BY 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6BY 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6BY 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2e2b5b50cf514089077e3756356550696fdf23d53ce2c7fe4f97bc2ef4ddd325 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O4M 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2e2b5b50cf514089077e3756356550696fdf23d53ce2c7fe4f97bc2ef4ddd325 3 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2e2b5b50cf514089077e3756356550696fdf23d53ce2c7fe4f97bc2ef4ddd325 3 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2e2b5b50cf514089077e3756356550696fdf23d53ce2c7fe4f97bc2ef4ddd325 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O4M 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O4M 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.O4M 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2549720 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2549720 ']' 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.931 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.192 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.192 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eg7 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4QM ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4QM 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7on 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.NLy ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.NLy 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.3yL 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hCE ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCE 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ate 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6BY ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6BY 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.O4M 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:25.193 07:26:36 
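At this point all of the generated secrets are registered with the SPDK application's keyring: key0-key4 are the host secrets, ckey0-ckey3 the matching controller secrets, and ckeys[4] is deliberately left empty so the last key is later used without a controller secret. The registered names, not the file paths, are what the --dhchap-key/--dhchap-ctrlr-key arguments below refer to. Stripped of the rpc_cmd wrapper, the loop is plain scripts/rpc.py (file names are this run's mktemp results):

    ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.eg7
    ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4QM
    ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.7on
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NLy
    ./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.3yL
    ./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCE
    ./scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.ate
    ./scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.6BY
    ./scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.O4M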
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:25.193 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:25.454 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:25.454 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:25.455 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:25.455 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:31:25.455 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:25.455 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:25.455 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:25.455 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:28.756 Waiting for block devices as requested 00:31:28.756 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:28.756 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:28.756 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:29.017 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:29.017 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:29.017 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:29.277 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:29.277 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:29.277 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:29.539 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:29.539 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:29.539 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:29.799 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:29.799 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:29.799 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:29.799 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:30.059 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:31.001 No valid GPT data, bailing 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:31.001 07:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:31:31.001 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:31.001 00:31:31.001 Discovery Log Number of Records 2, Generation counter 2 00:31:31.001 =====Discovery Log Entry 0====== 00:31:31.001 trtype: tcp 00:31:31.001 adrfam: ipv4 00:31:31.001 subtype: current discovery subsystem 00:31:31.001 treq: not specified, sq flow control disable supported 00:31:31.001 portid: 1 00:31:31.001 trsvcid: 4420 00:31:31.001 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:31.001 traddr: 10.0.0.1 00:31:31.001 eflags: none 00:31:31.001 sectype: none 00:31:31.001 =====Discovery Log Entry 1====== 00:31:31.001 trtype: tcp 00:31:31.001 adrfam: ipv4 00:31:31.001 subtype: nvme subsystem 00:31:31.001 treq: not specified, sq flow control disable supported 00:31:31.001 portid: 1 00:31:31.001 trsvcid: 4420 00:31:31.001 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:31.001 traddr: 10.0.0.1 00:31:31.001 eflags: none 00:31:31.001 sectype: none 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
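The counterpart target lives in the kernel: configure_kernel_target builds it over configfs, using the one NVMe disk that setup.sh reset left bound to the kernel driver (the spdk-gpt.py probe's "No valid GPT data, bailing" is the signal that the disk is unused) as namespace 1, and nvmet_auth_init then allow-lists the host NQN; the discovery log with two records confirms the port is live. The nvmet_auth_set_key echoes traced next (hmac(sha256), ffdhe2048, the two DHHC-1 strings) presumably land in the kernel's per-host dhchap_* attributes. A hedged consolidation of the whole sequence, assuming a kernel with nvmet DH-HMAC-CHAP support (v5.19+):

    # Sketch of configure_kernel_target + nvmet_auth_init/_set_key as traced above.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

    modprobe -a nvmet nvmet-tcp    # target core + TCP transport
    mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"

    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # the kernel-owned disk found above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 0            > "$subsys/attr_allow_any_host"       # auth wants an explicit allow-list
    ln -s "$host" "$subsys/allowed_hosts/"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # DH-HMAC-CHAP parameters for this host; the attribute names are the
    # kernel's and are inferred here rather than visible in the trace:
    echo 'hmac(sha256)'          > "$host/dhchap_hash"
    echo ffdhe2048               > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:ZjQ...Lg==:' > "$host/dhchap_key"       # host secret (key1; elided)
    echo 'DHHC-1:02:ZWQ...9Q==:' > "$host/dhchap_ctrl_key"  # controller secret (ckey1; elided)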
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.001 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.262 nvme0n1 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
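Every iteration of the authentication sweep that follows has exactly this shape: pin the initiator's negotiable DH-HMAC-CHAP parameters with bdev_nvme_set_options (the very first attach above negotiates over the full sha256/sha384/sha512 x ffdhe2048..ffdhe8192 matrix; the per-keyid rounds pin one digest/dhgroup pair), attach with the keyring names, treat the appearance of controller nvme0 (and its namespace, the bare "nvme0n1" lines in the log) as proof the handshake succeeded, and detach before the next combination. One round, written out as standalone rpc.py calls that mirror the trace:

    # One connect_authenticate round (sha256 / ffdhe2048 / keyid 1) as plain rpc.py.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # authentication succeeded iff the controller materialized
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The rest of this section is that round repeated for keyid 0 through 4 under sha256/ffdhe2048 (keyid 4 attaches without --dhchap-ctrlr-key, since ckeys[4] is empty), before the outer loops move on to the other digests and DH groups.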
00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.262 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.524 nvme0n1 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.524 07:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.524 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.786 nvme0n1 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.786 nvme0n1 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:31.786 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.047 07:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.047 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.048 nvme0n1 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.048 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.309 nvme0n1 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.309 07:26:43 
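Every attach above is preceded by the same get_main_ns_ip expansion from nvmf/common.sh: the transport name selects the *name* of the variable that holds the address, and indirect expansion then yields 10.0.0.1. A minimal sketch of that helper, reconstructed from the expanded tests in the trace (the TEST_TRANSPORT variable name is an assumption; the candidate map and the echoed value appear verbatim in the log):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # variable *names*, not addresses
        [tcp]=NVMF_INITIATOR_IP
    )
    # bail out if the transport is unset or has no candidate variable
    [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip:-} ]] && return 1     # dereference the chosen variable name
    echo "${!ip}"
}
# assumed usage: TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip  -> 10.0.0.1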
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:32.309 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.310 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.570 nvme0n1 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.570 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.831 
07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.831 nvme0n1 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.831 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:32.831 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.091 07:26:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.091 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.091 nvme0n1 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.092 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.352 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.353 07:26:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.353 nvme0n1 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.353 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:33.615 07:26:44 
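On the target half of each cycle, nvmet_auth_set_key programs the kernel soft target with the secret the next attach must match; the trace only shows the bare echo and [[ -z ]] steps. A hedged guess at the helper's shape, assuming the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key under the host entry; needs root) and the keys/ckeys arrays set up earlier in auth.sh:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3 key ckey
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup"      > "$host/dhchap_dhgroup"
    echo "$key"          > "$host/dhchap_key"
    # the controller (bidirectional) secret is optional; key 4 runs without one,
    # matching the bare "[[ -z '' ]]" with no following echo in the trace
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}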
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.615 nvme0n1 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:33.615 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:33.876 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.877 07:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.137 nvme0n1 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:34.137 07:26:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:34.137 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.138 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.399 nvme0n1 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
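Pulling the two halves together, every iteration in this stretch of the log is the same connect_authenticate cycle: after nvmet_auth_set_key has programmed the target as sketched above, the host is pinned to one digest/DH-group pair, attached with the matching keyring entries, checked for the nvme0 controller, and detached before the next key. A condensed, standalone sketch of the host side (the rpc.py path and the key0..key4/ckey0..ckey3 keyring names are assumptions carried over from the trace; key 4 has no controller secret, hence the conditional):

#!/usr/bin/env bash
set -e

rpc=scripts/rpc.py                      # assumed path to SPDK's RPC client
tgt=10.0.0.1 port=4420
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in 0 1 2 3 4; do
        # allow exactly one digest/dhgroup combination for this attempt
        "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"

        ckey=()   # key 4 is unidirectional in this run
        if ((keyid < 4)); then ckey=(--dhchap-ctrlr-key "ckey$keyid"); fi

        "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$tgt" -s "$port" -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # the attach only succeeds if DH-HMAC-CHAP passed; verify, then reset
        [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        "$rpc" bdev_nvme_detach_controller nvme0
    done
done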
00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.399 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.660 nvme0n1 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.660 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:34.920 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:34.920 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:34.920 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:34.920 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.920 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.920 nvme0n1 00:31:34.920 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.181 07:26:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.181 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.442 nvme0n1 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:35.442 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.443 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.015 nvme0n1 00:31:36.015 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.015 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.015 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.015 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.015 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.015 07:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 
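
Note on the trace above: for every digest/dhgroup/keyid combination, host/auth.sh first provisions the key pair on the kernel nvmet target (nvmet_auth_set_key), then restricts the SPDK host to that single digest and DH group and re-attaches the controller (connect_authenticate). The helper bodies are not part of this excerpt; below is a minimal sketch of one iteration -- sha256/ffdhe6144/keyid 1, matching the entries just above -- assuming the standard nvmet configfs attribute names on the target side and host keys already registered under the names key1/ckey1 earlier in the test (neither is shown in this excerpt).

# Target side: publish the host's key pair for the given digest and DH group.
# Hypothetical paths -- nvmet_auth_set_key's body is not shown in this log,
# but the echoed values above match these nvmet configfs attributes.
host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_cfs/dhchap_hash"
echo ffdhe6144 > "$host_cfs/dhchap_dhgroup"
echo 'DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==:' > "$host_cfs/dhchap_key"
# Controller key is set only when a non-empty ckey exists for this keyid,
# which enables bidirectional authentication.
echo 'DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==:' > "$host_cfs/dhchap_ctrl_key"

# Host side: the rpc_cmd calls visible in the trace (rpc_cmd wraps scripts/rpc.py).
# Restrict negotiation to exactly one digest and one DH group, then attach
# with the matching key pair.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Success check and teardown before the next keyid: the controller list must
# contain exactly nvme0 (the 'nvme0n1' lines in the trace are its namespace).
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc.py bdev_nvme_detach_controller nvme0

The 10.0.0.1 target address is produced by get_main_ns_ip, which for transport tcp selects NVMF_INITIATOR_IP, as traced repeatedly above. The same sequence then repeats for each remaining keyid, for the larger DH groups (ffdhe8192 follows ffdhe6144 in this excerpt), and for the next digest (the sha384/ffdhe2048 round begins near the end of this excerpt).
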
00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.015 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.276 nvme0n1 00:31:36.276 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.277 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.277 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.277 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.277 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.537 07:26:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.537 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.538 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.799 nvme0n1 00:31:36.799 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.799 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.799 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.799 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.799 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.799 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.059 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.059 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.059 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.059 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.059 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.060 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.320 nvme0n1 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.320 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:37.581 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.582 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.844 nvme0n1 00:31:37.844 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.844 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.844 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.844 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.844 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.844 07:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.844 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:38.112 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:38.112 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.112 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:38.683 nvme0n1 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.683 07:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.628 nvme0n1 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:39.628 
07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.628 07:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.201 nvme0n1 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.201 
07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.201 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.772 nvme0n1 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.772 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.033 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.034 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.034 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.034 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.034 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.034 07:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.607 nvme0n1 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.607 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.867 nvme0n1 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:41.867 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.868 07:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.130 nvme0n1 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:42.130 07:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.130 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.419 nvme0n1 00:31:42.419 07:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.419 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.420 nvme0n1 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.420 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:42.769 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 nvme0n1 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.770 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.771 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.771 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.771 07:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.047 nvme0n1 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.047 
07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.047 07:26:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.047 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.310 nvme0n1 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.310 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.571 nvme0n1 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:43.571 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.572 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.832 nvme0n1 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:43.832 
07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.832 07:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.092 nvme0n1 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.092 
07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.092 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.093 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.093 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.093 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.093 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.093 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:44.093 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.093 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.353 nvme0n1 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.353 07:26:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.353 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.614 nvme0n1 00:31:44.614 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.614 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.614 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.614 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.614 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.614 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.875 07:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.136 nvme0n1 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:45.136 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.137 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.398 nvme0n1 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.398 07:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.398 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.658 nvme0n1 00:31:45.658 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.658 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.658 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.658 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.658 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.658 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.919 07:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.181 nvme0n1 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.181 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.442 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.703 nvme0n1 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.703 07:26:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.703 07:26:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.703 07:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.275 nvme0n1 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:47.275 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.276 
07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.846 nvme0n1 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.846 07:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.106 nvme0n1 00:31:48.106 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.106 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.106 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.106 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.106 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.366 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.367 07:26:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.367 07:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.938 nvme0n1 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.938 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.879 nvme0n1 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.879 
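
Editor's note: the xtrace above is one pass through the authentication matrix in test/nvmf/host/auth.sh. For every digest, DH group, and key index, the script first programs the kernel target (nvmet_auth_set_key, host/auth.sh@103) and then reconnects with matching initiator settings (connect_authenticate, host/auth.sh@104). A minimal, self-contained sketch of that loop, reconstructed from the @102-@104 markers above and the @100-@101 markers that appear where the digest rolls over to sha512 further down; array contents and the two stub functions are illustrative, not the script's literal definitions:

```bash
#!/usr/bin/env bash
# Skeleton of the auth test matrix, reconstructed from host/auth.sh@100-@104
# xtrace markers. Array contents and both stubs are illustrative only; the
# real script derives keys[]/ckeys[] from generated DHHC-1 secrets.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=([0]=DHHC-1:00:... [1]=DHHC-1:00:... [2]=DHHC-1:01:... [3]=DHHC-1:02:... [4]=DHHC-1:03:...)

nvmet_auth_set_key()   { echo "target  <- $*"; }  # stub for the real helper
connect_authenticate() { echo "initiator <- $*"; } # stub for the real helper

for digest in "${digests[@]}"; do              # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
    for keyid in "${!keys[@]}"; do             # host/auth.sh@102
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
    done
  done
done
```
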
07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.879 07:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.450 nvme0n1 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.450 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.451 07:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.026 nvme0n1 00:31:51.026 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.026 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.026 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.026 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.026 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.026 07:27:02 
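
Editor's note: on the target side, the nvmet_auth_set_key trace (host/auth.sh@42-@51) shows exactly what gets echoed for each iteration: the HMAC name, the FFDHE group, the DHHC-1 host key, and, when one exists, the bidirectional controller key. The redirection targets are not visible in the xtrace; assuming the helper writes the standard Linux kernel nvmet configfs host attributes, it plausibly reduces to something like this sketch (attribute paths are an assumption, not shown in the log):

```bash
# Hedged sketch of nvmet_auth_set_key: the echo targets are assumed to be
# the usual nvmet configfs attributes; the xtrace only shows the echoed
# values. keys[]/ckeys[] are the test's global secret arrays.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"     # host/auth.sh@48
    echo "$dhgroup"      > "$host/dhchap_dhgroup"  # host/auth.sh@49
    echo "$key"          > "$host/dhchap_key"      # host/auth.sh@50
    # Key index 4 has no controller key, so the bidirectional attribute
    # is only written when a ckey is defined (host/auth.sh@51).
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
```
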
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.286 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.287 07:27:02 
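
Editor's note: keyid 4 above is the one entry with no bidirectional secret (the @46 trace shows ckey= and @51 evaluates [[ -z '' ]]), so its attach just below passes --dhchap-key key4 alone. The trace at host/auth.sh@58 shows how the optional flag pair is dropped without an if-statement: ckey is built as an array via bash's ${var:+word} alternate-value expansion, which yields zero words when the source entry is empty or unset. A runnable demonstration with illustrative values:

```bash
# ${var:+word} expands to "word" only when var is set and non-empty, so the
# optional flag pair simply vanishes for key index 4.
ckeys=([1]="DHHC-1:02:placeholder" [4]="")   # illustrative values

keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # 2 -> flag and argument are passed to rpc_cmd

keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # 0 -> rpc_cmd sees no extra arguments at all
```
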
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.287 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.859 nvme0n1 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:51.859 07:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.859 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:52.120 nvme0n1 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.120 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.121 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.382 nvme0n1 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:52.382 
07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.382 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.643 nvme0n1 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.643 
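
Editor's note: before every attach, the test resolves the target address through get_main_ns_ip (nvmf/common.sh@769-@783, traced in full just above). The trick the trace reveals: an associative array maps each transport to the *name* of the environment variable holding the address, and the function dereferences that name with bash indirect expansion. A reconstruction from the trace; the transport variable name and the error handling are guesses, since only the successful path is traced:

```bash
# Reconstructed from the nvmf/common.sh@769-@783 xtrace. TEST_TRANSPORT and
# the NVMF_* variables are assumed to be set elsewhere by the test harness.
get_main_ns_ip() {
    local ip                                       # nvmf/common.sh@769
    local -A ip_candidates=()                      # nvmf/common.sh@770
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # nvmf/common.sh@772
    ip_candidates["tcp"]=NVMF_INITIATOR_IP         # nvmf/common.sh@773

    # @775 traces two -z tests: transport set, and a candidate exists for it
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}           # @776: e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                    # @778: indirect expansion
    echo "${!ip}"                                  # @783: 10.0.0.1 in this run
}
```
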
07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.643 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.904 nvme0n1 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.904 07:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.904 nvme0n1 00:31:52.904 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.165 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.166 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:53.166 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.166 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.426 nvme0n1 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.426 
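
Editor's note: every secret in this log uses the NVMe DH-HMAC-CHAP representation DHHC-1:&lt;t&gt;:&lt;base64&gt;:, where &lt;t&gt; encodes how the secret was transformed (00 = unhashed; 01/02/03 = hashed with SHA-256/384/512, as generated by nvme-cli's gen-dhchap-key) and the base64 payload is the secret followed by a 4-byte CRC32 checksum. That makes payload lengths checkable straight from the log; for instance, the keyid 1 secret above is a 48-byte key, so its payload should decode to 52 bytes:

```bash
# Sanity-check a DHHC-1 secret taken verbatim from this log: 48 secret
# bytes plus the 4-byte CRC32 should decode to exactly 52 bytes.
key='DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==:'
b64=$(cut -d: -f3 <<< "$key")            # strip the DHHC-1 and subtype fields
echo -n "$b64" | base64 -d | wc -c       # prints 52
```
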
07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.426 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.427 07:27:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.427 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 nvme0n1 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:53.688 07:27:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.688 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.689 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.689 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.689 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.689 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.689 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.689 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.949 nvme0n1 00:31:53.949 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.949 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.949 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.949 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.949 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.949 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.949 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.950 07:27:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.950 07:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.210 nvme0n1 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:54.210 
07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:54.210 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.211 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
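The ffdhe3072 pass above repeats one fixed cycle per key index: pin the initiator to a single digest/DH-group pair, attach with the key (and, when present, the controller key), confirm the controller appears, then detach. Condensed into plain shell, one iteration of that cycle looks roughly like the sketch below; every RPC name and flag is taken from the trace, while the rpc.py path is an assumption (the log's rpc_cmd wrapper resolves it internally).

  rpc=./scripts/rpc.py   # assumption: run from an SPDK checkout; rpc_cmd hides this in the log
  # Pin the initiator to the digest/DH group under test (host/auth.sh@60).
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Attach with the keyring names used throughout this run (host/auth.sh@61).
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Authentication succeeded iff the controller materialized (host/auth.sh@64-65).
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0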
00:31:54.473 nvme0n1 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.473 07:27:05 
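At this point the outer loop advances from ffdhe3072 to ffdhe4096 (the `for dhgroup` / `for keyid` entries traced at host/auth.sh@101-103). The driver producing the whole sweep can be sketched as below; the array contents are inferred from what this log actually exercises (sha512 only in this excerpt, three ffdhe groups, key ids 0-4), and the key names are stand-ins for the real DHHC-1 secrets.

  # Sweep driver as traced at host/auth.sh@101-104; arrays inferred from this log.
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
  keys=(key0 key1 key2 key3 key4)              # stand-ins for the real DHHC-1 secrets
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program the target side
          connect_authenticate sha512 "$dhgroup" "$keyid"  # attach, verify, detach
      done
  done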
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.473 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.735 nvme0n1 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.735 07:27:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.735 07:27:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.735 07:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.997 nvme0n1 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.997 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.258 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.520 nvme0n1 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.520 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.782 nvme0n1 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.782 07:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.044 nvme0n1 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.044 07:27:07 
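One detail worth noticing in the keyid=4 iterations above: ckey is empty there, so the attach is issued with --dhchap-key key4 and no --dhchap-ctrlr-key at all. That comes from the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced at host/auth.sh@58: ${var:+word} yields word only when var is set and non-empty, so an unset controller secret produces an empty array. A minimal standalone illustration:

  # ${var:+word} expands to word only for a set, non-empty var; an empty
  # controller secret therefore contributes zero extra arguments.
  ckeys=([1]=ctrlsecret1 [4]="")
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
  done
  # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 -> 0 extra args: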
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.044 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.617 nvme0n1 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:56.617 07:27:07 
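The DHHC-1 strings echoed throughout follow the NVMe in-band-authentication secret representation DHHC-1:<hh>:<base64>:. To my reading of the key lengths in this run, <hh> tracks the transformation hash (00 for an untransformed secret, 01/02/03 for SHA-256/384/512 with 32/48/64-byte keys), and the base64 payload carries the key plus a 4-byte CRC-32. That length relationship can be spot-checked from the shell with a secret copied verbatim from this log:

  # Payload length check: key bytes + 4-byte CRC-32.
  # Secret copied verbatim from this run (keyid 2, DHHC-1:01: => 32-byte key).
  secret='DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz:'
  payload=$(cut -d: -f3 <<< "$secret")
  printf '%s' "$payload" | base64 -d | wc -c    # prints 36 = 32 + 4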
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.617 07:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.187 nvme0n1 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.187 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.449 nvme0n1 00:31:57.449 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.449 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.449 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.449 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.449 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.710 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.711 07:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.971 nvme0n1 00:31:57.971 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.971 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.971 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.972 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.972 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.972 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.234 07:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.234 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.495 nvme0n1 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.495 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVhZjI3YmFiNzFjOTY5NjY3ZTJmYzUxOGZmYmFhNTChrzn6: 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: ]] 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjODUzYzE0NWE1YmY3NTE5YWQwMThjMmU1NTBlYmMyMjIyNTI4NjM0NjQzZmIyOGU5ZjQwN2QwNzc2OThhMJfLkSc=: 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.755 07:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.328 nvme0n1 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.328 07:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.899 nvme0n1 00:31:59.899 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.899 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.899 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.899 07:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.899 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.899 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.161 07:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.161 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.733 nvme0n1 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzZmOGYyN2NmYTA4MGNmZjhlMGU5NjhkMTNkNjY0NDk1OGY4NGIyNDhkZWJmMTJil5i+4A==: 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWY0ZWJiMzAxOGE5OTc3MjFlOWJlYzgzYzZkMTgzNjYvYBr/: 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:00.733 07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.733 
07:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.675 nvme0n1 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmUyYjViNTBjZjUxNDA4OTA3N2UzNzU2MzU2NTUwNjk2ZmRmMjNkNTNjZTJjN2ZlNGY5N2JjMmVmNGRkZDMyNfxgavg=: 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:01.675 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.676 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.676 07:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.248 nvme0n1 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.248 request: 00:32:02.248 { 00:32:02.248 "name": "nvme0", 00:32:02.248 "trtype": "tcp", 00:32:02.248 "traddr": "10.0.0.1", 00:32:02.248 "adrfam": "ipv4", 00:32:02.248 "trsvcid": "4420", 00:32:02.248 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:02.248 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:02.248 "prchk_reftag": false, 00:32:02.248 "prchk_guard": false, 00:32:02.248 "hdgst": false, 00:32:02.248 "ddgst": false, 00:32:02.248 "allow_unrecognized_csi": false, 00:32:02.248 "method": "bdev_nvme_attach_controller", 00:32:02.248 "req_id": 1 00:32:02.248 } 00:32:02.248 Got JSON-RPC error response 00:32:02.248 response: 00:32:02.248 { 00:32:02.248 "code": -5, 00:32:02.248 "message": "Input/output error" 00:32:02.248 } 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
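The helper being traced at this point, get_main_ns_ip from nvmf/common.sh, resolves which address the initiator should dial for the transport under test. Only the post-expansion commands are visible in the xtrace, so the following is a rough source-level reconstruction; the TEST_TRANSPORT variable name, the combined guard, and the return codes are assumptions, not the verbatim source:

    # Sketch of nvmf/common.sh:769-783 as reconstructed from the trace above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Traced above as: [[ -z tcp ]] || [[ -z NVMF_INITIATOR_IP ]]
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ${!ip} is bash indirect expansion: the value of the variable named
        # by $ip, i.e. $NVMF_INITIATOR_IP, which is 10.0.0.1 in this run.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

For a tcp run like this one the helper therefore prints 10.0.0.1, the address every bdev_nvme_attach_controller call in this trace connects to.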
00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:02.248 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.249 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.510 request: 00:32:02.510 { 00:32:02.510 "name": "nvme0", 00:32:02.510 "trtype": "tcp", 00:32:02.510 "traddr": "10.0.0.1", 00:32:02.510 "adrfam": "ipv4", 00:32:02.510 "trsvcid": "4420", 00:32:02.510 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:02.510 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:02.510 "prchk_reftag": false, 00:32:02.510 "prchk_guard": false, 00:32:02.510 "hdgst": false, 00:32:02.510 "ddgst": false, 00:32:02.510 "dhchap_key": "key2", 00:32:02.510 "allow_unrecognized_csi": false, 00:32:02.510 "method": "bdev_nvme_attach_controller", 00:32:02.510 "req_id": 1 00:32:02.510 } 00:32:02.510 Got JSON-RPC error response 00:32:02.510 response: 00:32:02.510 { 00:32:02.510 "code": -5, 00:32:02.510 "message": "Input/output error" 00:32:02.510 } 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
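Both failed attach attempts above follow the same pattern: the NOT wrapper (from common/autotest_common.sh) runs rpc_cmd expecting a non-zero exit, bdev_nvme_attach_controller fails with JSON-RPC error -5 (Input/output error) because DH-HMAC-CHAP authentication cannot complete without an acceptable key, and bdev_nvme_get_controllers piped to jq length confirms no controller object was left behind. Outside the test harness the same probe can be replayed with SPDK's scripts/rpc.py; a minimal sketch, assuming the default local RPC socket, the target subsystem from this run, and key2 still loaded in the host keyring but not accepted by the target for this host:

    # Expected-failure probe: attaching with a mismatched DH-HMAC-CHAP key
    # (or none at all) must fail rather than fall back to an unauthenticated
    # connection. rpc.py exits non-zero when the RPC returns an error.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "unexpected: attach with a mismatched key succeeded" >&2
        exit 1
    fi
    # Authentication failure must not leave a half-attached controller around.
    (( $(scripts/rpc.py bdev_nvme_get_controllers | jq length) == 0 ))

The remaining negative tests below apply the same expect-failure pattern to a mismatched controller key (key1 with ckey2) and to bdev_nvme_set_keys, which instead reports -13 (Permission denied).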
00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.510 request: 00:32:02.510 { 00:32:02.510 "name": "nvme0", 00:32:02.510 "trtype": "tcp", 00:32:02.510 "traddr": "10.0.0.1", 00:32:02.510 "adrfam": "ipv4", 00:32:02.510 "trsvcid": "4420", 00:32:02.510 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:02.510 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:02.510 "prchk_reftag": false, 00:32:02.510 "prchk_guard": false, 00:32:02.510 "hdgst": false, 00:32:02.510 "ddgst": false, 00:32:02.510 "dhchap_key": "key1", 00:32:02.510 "dhchap_ctrlr_key": "ckey2", 00:32:02.510 "allow_unrecognized_csi": false, 00:32:02.510 "method": "bdev_nvme_attach_controller", 00:32:02.510 "req_id": 1 00:32:02.510 } 00:32:02.510 Got JSON-RPC error response 00:32:02.510 response: 00:32:02.510 { 00:32:02.510 "code": -5, 00:32:02.510 "message": "Input/output 
error" 00:32:02.510 } 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.510 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.777 nvme0n1 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.777 request: 00:32:02.777 { 00:32:02.777 "name": "nvme0", 00:32:02.777 "dhchap_key": "key1", 00:32:02.777 "dhchap_ctrlr_key": "ckey2", 00:32:02.777 "method": "bdev_nvme_set_keys", 00:32:02.777 "req_id": 1 00:32:02.777 } 00:32:02.777 Got JSON-RPC error response 00:32:02.777 response: 00:32:02.777 { 00:32:02.777 "code": -13, 00:32:02.777 "message": "Permission denied" 00:32:02.777 } 00:32:02.777 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.041 07:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.041 07:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:03.041 07:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:03.985 07:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.985 07:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:03.985 07:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.985 07:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.985 07:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.985 07:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:03.985 07:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:04.926 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.926 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:04.926 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.926 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.926 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQwMGU5OGU5OTBiMjA2MmRjYmZlODFhMGVmYzVjMzYxN2FhNGMzZjdhNGY0ZDRh6JYrLg==: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZWQ0ZWI1OTFmYjQ0MTJmOWU1NDI1ZjdiMTJhMzg5MDdjOTdlNGU1ODMxMmMwNWU3/e+f9Q==: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.187 nvme0n1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU1N2I0MzNiYjc1Njg4NWU5ZTVhMzc3MjAwNThlOWPTujtz: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDI1NDkyYThjMzI2ZTg5N2FjYTRkM2Q4YmZjOTU2NTnQNBsD: 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.187 request: 00:32:05.187 { 00:32:05.187 "name": "nvme0", 00:32:05.187 "dhchap_key": "key2", 00:32:05.187 "dhchap_ctrlr_key": "ckey1", 00:32:05.187 "method": "bdev_nvme_set_keys", 00:32:05.187 "req_id": 1 00:32:05.187 } 00:32:05.187 Got JSON-RPC error response 00:32:05.187 response: 00:32:05.187 { 00:32:05.187 "code": -13, 00:32:05.187 "message": "Permission denied" 00:32:05.187 } 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.187 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.447 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.447 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:05.447 07:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:06.389 07:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:06.389 rmmod nvme_tcp 00:32:06.389 rmmod nvme_fabrics 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2549720 ']' 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2549720 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2549720 ']' 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2549720 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549720 00:32:06.389 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:06.650 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:06.650 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549720' 00:32:06.651 killing process with pid 2549720 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2549720 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2549720 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:06.651 07:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.566 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:08.567 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:08.567 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:08.827 07:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:12.125 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:12.125 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:12.125 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:12.385 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:12.955 07:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eg7 /tmp/spdk.key-null.7on /tmp/spdk.key-sha256.3yL /tmp/spdk.key-sha384.ate /tmp/spdk.key-sha512.O4M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:12.955 07:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:16.258 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:32:16.258 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:16.258 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:16.258 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:16.831 00:32:16.831 real 1m0.982s 00:32:16.831 user 0m54.881s 00:32:16.831 sys 0m16.022s 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.831 ************************************ 00:32:16.831 END TEST nvmf_auth_host 00:32:16.831 ************************************ 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.831 ************************************ 00:32:16.831 START TEST nvmf_digest 00:32:16.831 ************************************ 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:16.831 * Looking for test storage... 
00:32:16.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:32:16.831 07:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.831 --rc genhtml_branch_coverage=1 00:32:16.831 --rc genhtml_function_coverage=1 00:32:16.831 --rc genhtml_legend=1 00:32:16.831 --rc geninfo_all_blocks=1 00:32:16.831 --rc geninfo_unexecuted_blocks=1 00:32:16.831 00:32:16.831 ' 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.831 --rc genhtml_branch_coverage=1 00:32:16.831 --rc genhtml_function_coverage=1 00:32:16.831 --rc genhtml_legend=1 00:32:16.831 --rc geninfo_all_blocks=1 00:32:16.831 --rc geninfo_unexecuted_blocks=1 00:32:16.831 00:32:16.831 ' 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.831 --rc genhtml_branch_coverage=1 00:32:16.831 --rc genhtml_function_coverage=1 00:32:16.831 --rc genhtml_legend=1 00:32:16.831 --rc geninfo_all_blocks=1 00:32:16.831 --rc geninfo_unexecuted_blocks=1 00:32:16.831 00:32:16.831 ' 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.831 --rc genhtml_branch_coverage=1 00:32:16.831 --rc genhtml_function_coverage=1 00:32:16.831 --rc genhtml_legend=1 00:32:16.831 --rc geninfo_all_blocks=1 00:32:16.831 --rc geninfo_unexecuted_blocks=1 00:32:16.831 00:32:16.831 ' 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.831 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.094 
07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:17.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.094 07:27:28 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.094 07:27:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.245 
07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:25.245 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:25.245 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:25.245 Found net devices under 0000:4b:00.0: cvl_0_0 
00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:25.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:25.245 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:25.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:32:25.246 00:32:25.246 --- 10.0.0.2 ping statistics --- 00:32:25.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.246 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:25.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:32:25.246 00:32:25.246 --- 10.0.0.1 ping statistics --- 00:32:25.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.246 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:25.246 ************************************ 00:32:25.246 START TEST nvmf_digest_clean 00:32:25.246 ************************************ 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2567410 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2567410 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2567410 ']' 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.246 07:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:25.246 [2024-11-27 07:27:35.701499] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:32:25.246 [2024-11-27 07:27:35.701566] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.246 [2024-11-27 07:27:35.801518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.246 [2024-11-27 07:27:35.851973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.246 [2024-11-27 07:27:35.852025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.246 [2024-11-27 07:27:35.852033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.246 [2024-11-27 07:27:35.852040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.246 [2024-11-27 07:27:35.852048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
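The records above show the digest test bringing up its target: host/digest.sh launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc (so nothing starts serving until RPC-driven init), records the pid (2567410 here) as nvmfpid, and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming rpc.py and the default /var/tmp/spdk.sock socket; the polling loop is an illustrative stand-in for the autotest waitforlisten helper, not its exact code:

    # launch the target in the test namespace, deferring subsystem init until RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!                      # pid recorded for the later killprocess step
    # stand-in for waitforlisten: poll until the app responds on its RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done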
00:32:25.246 [2024-11-27 07:27:35.852846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:25.509 null0 00:32:25.509 [2024-11-27 07:27:36.652174] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.509 [2024-11-27 07:27:36.676474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2567576 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2567576 /var/tmp/bperf.sock 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2567576 ']' 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.509 07:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:25.771 [2024-11-27 07:27:36.746806] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:32:25.771 [2024-11-27 07:27:36.746874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567576 ] 00:32:25.771 [2024-11-27 07:27:36.839529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.771 [2024-11-27 07:27:36.891349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.343 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.343 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:26.343 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:26.343 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:26.344 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:26.605 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.605 07:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:27.178 nvme0n1 00:32:27.178 07:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:27.178 07:27:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:27.178 Running I/O for 2 seconds... 
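For each workload, the harness drives a separate bdevperf client over its own RPC socket, as the surrounding records show: start bdevperf with --wait-for-rpc, call framework_start_init, attach the TCP controller with data digest enabled (--ddgst), then kick off the run through bdevperf.py perform_tests; the results follow below. A condensed sketch of that sequence, with the workspace prefix shortened to relative paths:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest this test exercises
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests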
00:32:29.595 18931.00 IOPS, 73.95 MiB/s [2024-11-27T06:27:40.800Z] 21277.00 IOPS, 83.11 MiB/s 00:32:29.595 Latency(us) 00:32:29.595 [2024-11-27T06:27:40.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.595 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:29.595 nvme0n1 : 2.00 21291.56 83.17 0.00 0.00 6004.62 2048.00 23702.19 00:32:29.595 [2024-11-27T06:27:40.800Z] =================================================================================================================== 00:32:29.595 [2024-11-27T06:27:40.800Z] Total : 21291.56 83.17 0.00 0.00 6004.62 2048.00 23702.19 00:32:29.595 { 00:32:29.595 "results": [ 00:32:29.595 { 00:32:29.595 "job": "nvme0n1", 00:32:29.595 "core_mask": "0x2", 00:32:29.595 "workload": "randread", 00:32:29.595 "status": "finished", 00:32:29.595 "queue_depth": 128, 00:32:29.595 "io_size": 4096, 00:32:29.595 "runtime": 2.003846, 00:32:29.595 "iops": 21291.556337163634, 00:32:29.595 "mibps": 83.17014194204545, 00:32:29.595 "io_failed": 0, 00:32:29.595 "io_timeout": 0, 00:32:29.595 "avg_latency_us": 6004.617915387319, 00:32:29.595 "min_latency_us": 2048.0, 00:32:29.595 "max_latency_us": 23702.18666666667 00:32:29.595 } 00:32:29.595 ], 00:32:29.595 "core_count": 1 00:32:29.595 } 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:29.595 | select(.opcode=="crc32c") 00:32:29.595 | "\(.module_name) \(.executed)"' 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:29.595 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2567576 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2567576 ']' 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2567576 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2567576 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2567576' 00:32:29.596 killing process with pid 2567576 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2567576 00:32:29.596 Received shutdown signal, test time was about 2.000000 seconds 00:32:29.596 00:32:29.596 Latency(us) 00:32:29.596 [2024-11-27T06:27:40.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.596 [2024-11-27T06:27:40.801Z] =================================================================================================================== 00:32:29.596 [2024-11-27T06:27:40.801Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2567576 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2568287 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2568287 /var/tmp/bperf.sock 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2568287 ']' 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:29.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.596 07:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:29.596 [2024-11-27 07:27:40.751941] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:32:29.596 [2024-11-27 07:27:40.752000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568287 ] 00:32:29.596 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:29.596 Zero copy mechanism will not be used. 00:32:29.912 [2024-11-27 07:27:40.834280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.912 [2024-11-27 07:27:40.864153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.490 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.491 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:30.491 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:30.491 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:30.491 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:30.751 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:30.751 07:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.011 nvme0n1 00:32:31.012 07:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:31.012 07:27:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:31.012 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:31.012 Zero copy mechanism will not be used. 00:32:31.012 Running I/O for 2 seconds... 
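The run above follows the fixed pattern digest.sh uses for every (workload, I/O size, queue depth) combination: bdevperf is launched suspended with --wait-for-rpc and then driven entirely over its private RPC socket. A minimal stand-alone sketch of that sequence, reusing the binaries and arguments that appear verbatim in the trace (the workspace path and the 10.0.0.2:4420 target come from this log; the waitforlisten step the suite performs between launch and first RPC is elided here):

    #!/usr/bin/env bash
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch bdevperf idle; --wait-for-rpc defers subsystem init until an RPC says go.
    "$SPDK_ROOT"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

    # Finish initialization, then attach the target with data digest (--ddgst) enabled.
    "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the timed workload against the attached nvme0n1 bdev.
    "$SPDK_ROOT"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests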
00:32:33.339 7117.00 IOPS, 889.62 MiB/s [2024-11-27T06:27:44.544Z] 6985.50 IOPS, 873.19 MiB/s 00:32:33.339 Latency(us) 00:32:33.339 [2024-11-27T06:27:44.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.339 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:33.339 nvme0n1 : 2.00 6984.65 873.08 0.00 0.00 2288.16 604.16 11414.19 00:32:33.339 [2024-11-27T06:27:44.544Z] =================================================================================================================== 00:32:33.339 [2024-11-27T06:27:44.544Z] Total : 6984.65 873.08 0.00 0.00 2288.16 604.16 11414.19 00:32:33.339 { 00:32:33.339 "results": [ 00:32:33.339 { 00:32:33.339 "job": "nvme0n1", 00:32:33.339 "core_mask": "0x2", 00:32:33.339 "workload": "randread", 00:32:33.339 "status": "finished", 00:32:33.339 "queue_depth": 16, 00:32:33.339 "io_size": 131072, 00:32:33.339 "runtime": 2.002535, 00:32:33.339 "iops": 6984.646959978228, 00:32:33.339 "mibps": 873.0808699972785, 00:32:33.339 "io_failed": 0, 00:32:33.339 "io_timeout": 0, 00:32:33.339 "avg_latency_us": 2288.1646176211248, 00:32:33.339 "min_latency_us": 604.16, 00:32:33.339 "max_latency_us": 11414.186666666666 00:32:33.339 } 00:32:33.339 ], 00:32:33.339 "core_count": 1 00:32:33.339 } 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:33.339 | select(.opcode=="crc32c") 00:32:33.339 | "\(.module_name) \(.executed)"' 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:33.339 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2568287 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2568287 ']' 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2568287 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568287 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568287' 00:32:33.340 killing process with pid 2568287 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2568287 00:32:33.340 Received shutdown signal, test time was about 2.000000 seconds 00:32:33.340 00:32:33.340 Latency(us) 00:32:33.340 [2024-11-27T06:27:44.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.340 [2024-11-27T06:27:44.545Z] =================================================================================================================== 00:32:33.340 [2024-11-27T06:27:44.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2568287 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2569122 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2569122 /var/tmp/bperf.sock 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2569122 ']' 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:33.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.340 07:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:33.601 [2024-11-27 07:27:44.572129] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:32:33.601 [2024-11-27 07:27:44.572192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569122 ] 00:32:33.601 [2024-11-27 07:27:44.653629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.601 [2024-11-27 07:27:44.683610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.172 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.172 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:34.172 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:34.172 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:34.172 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:34.432 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:34.432 07:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.003 nvme0n1 00:32:35.003 07:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:35.003 07:27:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:35.003 Running I/O for 2 seconds... 
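Each run then closes with the same verification, traced twice above: the test reads the crc32c counters out of bdevperf's accel layer and asserts that the expected module executed them. A sketch of that check; the accel_get_stats RPC, the jq program, and both assertions are copied verbatim from the trace, while the process-substitution packaging is an assumption about how the helper wires them together:

    # Ask bdevperf's accel framework which module executed the crc32c operations.
    read -r acc_module acc_executed < <(
        "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"')

    # These runs use scan_dsa=false, so the software module must have done the work.
    (( acc_executed > 0 )) && [[ "$acc_module" == software ]] || exit 1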
00:32:37.330 29711.00 IOPS, 116.06 MiB/s [2024-11-27T06:27:48.535Z] 29891.50 IOPS, 116.76 MiB/s 00:32:37.330 Latency(us) 00:32:37.330 [2024-11-27T06:27:48.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.330 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.330 nvme0n1 : 2.00 29894.92 116.78 0.00 0.00 4274.88 3167.57 14199.47 00:32:37.330 [2024-11-27T06:27:48.535Z] =================================================================================================================== 00:32:37.330 [2024-11-27T06:27:48.535Z] Total : 29894.92 116.78 0.00 0.00 4274.88 3167.57 14199.47 00:32:37.330 { 00:32:37.330 "results": [ 00:32:37.330 { 00:32:37.330 "job": "nvme0n1", 00:32:37.330 "core_mask": "0x2", 00:32:37.330 "workload": "randwrite", 00:32:37.330 "status": "finished", 00:32:37.330 "queue_depth": 128, 00:32:37.330 "io_size": 4096, 00:32:37.330 "runtime": 2.004053, 00:32:37.330 "iops": 29894.917948776805, 00:32:37.330 "mibps": 116.7770232374094, 00:32:37.330 "io_failed": 0, 00:32:37.330 "io_timeout": 0, 00:32:37.330 "avg_latency_us": 4274.878014833113, 00:32:37.330 "min_latency_us": 3167.5733333333333, 00:32:37.330 "max_latency_us": 14199.466666666667 00:32:37.330 } 00:32:37.330 ], 00:32:37.330 "core_count": 1 00:32:37.330 } 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:37.330 | select(.opcode=="crc32c") 00:32:37.330 | "\(.module_name) \(.executed)"' 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2569122 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2569122 ']' 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2569122 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2569122 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2569122' 00:32:37.330 killing process with pid 2569122 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2569122 00:32:37.330 Received shutdown signal, test time was about 2.000000 seconds 00:32:37.330 00:32:37.330 Latency(us) 00:32:37.330 [2024-11-27T06:27:48.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.330 [2024-11-27T06:27:48.535Z] =================================================================================================================== 00:32:37.330 [2024-11-27T06:27:48.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2569122 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2569958 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2569958 /var/tmp/bperf.sock 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2569958 ']' 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:37.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.330 07:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:37.330 [2024-11-27 07:27:48.510753] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:32:37.330 [2024-11-27 07:27:48.510810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569958 ] 00:32:37.330 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:37.330 Zero copy mechanism will not be used. 00:32:37.592 [2024-11-27 07:27:48.595394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.592 [2024-11-27 07:27:48.624789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.163 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.164 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:38.164 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:38.164 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:38.164 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:38.424 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.424 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.685 nvme0n1 00:32:38.685 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:38.685 07:27:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:38.685 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:38.685 Zero copy mechanism will not be used. 00:32:38.685 Running I/O for 2 seconds... 
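Two details of these 131072-byte runs are worth decoding. First, the zero-copy notice fires because the 128 KiB I/O size exceeds the 65536-byte (64 KiB) threshold, so the zero-copy mechanism is skipped; the 4096-byte runs above print no such notice. Second, the MiB/s column in the result tables is simply IOPS times I/O size, which makes the tables easy to sanity-check. A one-liner against the randread 128 KiB result above, with values copied from its JSON block:

    # "mibps" should equal iops * io_size / 2^20.
    awk 'BEGIN { printf "%.2f MiB/s\n", 6984.646959978228 * 131072 / 1048576 }'
    # prints 873.08 MiB/s, matching the reported mibps of 873.0808699972785.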
00:32:41.010 4258.00 IOPS, 532.25 MiB/s [2024-11-27T06:27:52.215Z] 5336.50 IOPS, 667.06 MiB/s 00:32:41.010 Latency(us) 00:32:41.010 [2024-11-27T06:27:52.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.010 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:41.010 nvme0n1 : 2.00 5337.67 667.21 0.00 0.00 2993.93 1194.67 13981.01 00:32:41.010 [2024-11-27T06:27:52.215Z] =================================================================================================================== 00:32:41.010 [2024-11-27T06:27:52.215Z] Total : 5337.67 667.21 0.00 0.00 2993.93 1194.67 13981.01 00:32:41.010 { 00:32:41.010 "results": [ 00:32:41.010 { 00:32:41.010 "job": "nvme0n1", 00:32:41.010 "core_mask": "0x2", 00:32:41.010 "workload": "randwrite", 00:32:41.010 "status": "finished", 00:32:41.010 "queue_depth": 16, 00:32:41.010 "io_size": 131072, 00:32:41.010 "runtime": 2.003121, 00:32:41.010 "iops": 5337.670565083187, 00:32:41.010 "mibps": 667.2088206353984, 00:32:41.010 "io_failed": 0, 00:32:41.010 "io_timeout": 0, 00:32:41.010 "avg_latency_us": 2993.9290983913206, 00:32:41.010 "min_latency_us": 1194.6666666666667, 00:32:41.010 "max_latency_us": 13981.013333333334 00:32:41.010 } 00:32:41.010 ], 00:32:41.010 "core_count": 1 00:32:41.010 } 00:32:41.010 07:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:41.010 07:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:41.010 07:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:41.010 07:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:41.010 | select(.opcode=="crc32c") 00:32:41.010 | "\(.module_name) \(.executed)"' 00:32:41.010 07:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2569958 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2569958 ']' 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2569958 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2569958 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2569958' 00:32:41.010 killing process with pid 2569958 00:32:41.010 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2569958 00:32:41.010 Received shutdown signal, test time was about 2.000000 seconds 00:32:41.010 00:32:41.010 Latency(us) 00:32:41.010 [2024-11-27T06:27:52.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.010 [2024-11-27T06:27:52.215Z] =================================================================================================================== 00:32:41.010 [2024-11-27T06:27:52.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:41.011 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2569958 00:32:41.272 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2567410 00:32:41.272 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2567410 ']' 00:32:41.272 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2567410 00:32:41.272 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:41.272 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.272 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2567410 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2567410' 00:32:41.273 killing process with pid 2567410 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2567410 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2567410 00:32:41.273 00:32:41.273 real 0m16.779s 00:32:41.273 user 0m32.978s 00:32:41.273 sys 0m3.789s 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.273 ************************************ 00:32:41.273 END TEST nvmf_digest_clean 00:32:41.273 ************************************ 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.273 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:41.535 ************************************ 00:32:41.535 START TEST nvmf_digest_error 00:32:41.535 ************************************ 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2570668 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2570668 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2570668 ']' 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.535 07:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:41.535 [2024-11-27 07:27:52.552549] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:32:41.535 [2024-11-27 07:27:52.552599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.535 [2024-11-27 07:27:52.640973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.535 [2024-11-27 07:27:52.669121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.535 [2024-11-27 07:27:52.669155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.535 [2024-11-27 07:27:52.669165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.535 [2024-11-27 07:27:52.669171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.535 [2024-11-27 07:27:52.669175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
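For the nvmf_digest_error tests the target itself is started with initialization held back, inside the test's network namespace; the launch command appears in full in the trace above. Isolated here for readability (the cvl_0_0_ns_spdk namespace is created earlier in the suite, outside this excerpt):

    # Start the NVMe-oF target suspended inside the test network namespace.
    # -e 0xFFFF enables all tracepoint groups, which is why app_setup_trace
    # above suggests 'spdk_trace -s nvmf -i 0' for offline analysis.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_ROOT"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &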
00:32:41.535 [2024-11-27 07:27:52.669651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:42.478 [2024-11-27 07:27:53.391619] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:42.478 null0 00:32:42.478 [2024-11-27 07:27:53.470359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.478 [2024-11-27 07:27:53.494566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2570940 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2570940 /var/tmp/bperf.sock 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2570940 ']' 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
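The target-side twist that defines this test, confirmed by the accel_rpc.c notice above, is that every crc32c operation is routed to the injectable 'error' accel module before initialization completes. In the trace this goes through rpc_cmd, the autotest wrapper; the direct call against the target's default RPC socket sketched below is an assumed equivalent, not a copy of the wrapper:

    # Bind crc32c to the 'error' module while the target still sits in --wait-for-rpc.
    "$SPDK_ROOT"/scripts/rpc.py accel_assign_opc -o crc32c -m error
    # The common_target_config step that follows creates the null0 bdev and the
    # TCP listener on 10.0.0.2 port 4420, per the tcp.c notices above.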
00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:42.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.478 07:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:42.478 [2024-11-27 07:27:53.550225] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:32:42.478 [2024-11-27 07:27:53.550273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570940 ] 00:32:42.478 [2024-11-27 07:27:53.634731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.478 [2024-11-27 07:27:53.664440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:43.420 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:43.680 nvme0n1 00:32:43.940 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:43.940 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.940 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
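With crc32c bound to the error module, the trace above configures both ends for the fault run: bdevperf is told to keep NVMe error statistics and retry failed I/O indefinitely, injection is reset to a clean state, the controller is attached with --ddgst, and corruption is then armed for 256 crc32c operations. Those corrupted digests are what the host flags in the wall of 'data digest error' notices that follows, each completing as a COMMAND TRANSIENT TRANSPORT ERROR and, given the unbounded retry count, being retried. The three RPCs, with arguments verbatim from the trace (rpc_cmd goes to the target, bperf_rpc to the bdevperf socket; the direct rpc.py form for the target calls is assumed):

    # bdevperf side: record NVMe error stats and never give up on failed I/O.
    "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side: reset injection, then corrupt the next 256 crc32c operations.
    "$SPDK_ROOT"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    "$SPDK_ROOT"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256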
00:32:43.940 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.940 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:43.940 07:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:43.940 Running I/O for 2 seconds... 00:32:43.940 [2024-11-27 07:27:55.009912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.009945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.009954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.018079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.018106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.018114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.030420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.030439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.030446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.041023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.041041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.041047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.048944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.048962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.048968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.058723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.058741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.058747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.067698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.067715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.067722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.075897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.075914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.075921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.085188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.085206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.085213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.093251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.093269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.093275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.102667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.102685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.102691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.111435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.111452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.111458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.120058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.120075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.120081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.129530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.129547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.129554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.940 [2024-11-27 07:27:55.139117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:43.940 [2024-11-27 07:27:55.139134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.940 [2024-11-27 07:27:55.139140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:44.202 [2024-11-27 07:27:55.146745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:44.202 [2024-11-27 07:27:55.146762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.202 [2024-11-27 07:27:55.146768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:44.202 [2024-11-27 07:27:55.156418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:44.202 [2024-11-27 07:27:55.156436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.202 [2024-11-27 07:27:55.156442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:44.202 [2024-11-27 07:27:55.164938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:44.202 [2024-11-27 07:27:55.164956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.202 [2024-11-27 07:27:55.164962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:44.202 [2024-11-27 07:27:55.172899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:44.202 [2024-11-27 07:27:55.172916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.202 [2024-11-27 07:27:55.172926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:44.202 [2024-11-27 07:27:55.182291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:44.202 [2024-11-27 07:27:55.182308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.202 [2024-11-27 07:27:55.182315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:44.202 [2024-11-27 07:27:55.193249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190)
00:32:44.202 [2024-11-27 07:27:55.193266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:44.202 [2024-11-27 07:27:55.193273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message sequence repeats for every read logged 07:27:55.205 through 07:27:55.429: data digest error on tqpair=(0x23a6190), the affected READ (sqid:1, nsid:1, len:1, varying cid/lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
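What the repeating failure above means: when digests are negotiated at connection setup, an NVMe/TCP data PDU carries a 4-byte CRC-32C data digest (DDGST) computed over its DATA field, and the nvme_tcp.c:1365 message is the host receive path reporting that the digest it recomputed over the received payload does not match the digest on the wire, so the read is completed as a transport error instead of handing corrupt data to the caller. Below is a minimal, standalone sketch of that check under stated assumptions: it hand-rolls the bitwise CRC-32C (Castagnoli) rather than using SPDK's optimized helpers, and ddgst_ok is a hypothetical name, not an SPDK API.

    /*
     * Sketch of the check that fails here: the receiver recomputes CRC-32C
     * over the PDU DATA field and compares it with the 4-byte DDGST that
     * trailed it on the wire. Bitwise CRC-32C (Castagnoli): reflected
     * polynomial 0x82F63B78, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++) {
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical helper: true when the received digest matches, i.e. the
     * case where no "data digest error" would be logged. */
    static bool ddgst_ok(const void *data, size_t len, uint32_t ddgst)
    {
        return crc32c(data, len) == ddgst;
    }

    int main(void)
    {
        uint8_t payload[512];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t wire_ddgst = crc32c(payload, sizeof(payload));
        printf("intact payload:  ok=%d\n",
               ddgst_ok(payload, sizeof(payload), wire_ddgst));

        payload[100] ^= 0x01; /* simulate a single bit flipped in transit */
        printf("corrupt payload: ok=%d\n",
               ddgst_ok(payload, sizeof(payload), wire_ddgst));
        return 0;
    }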
[... the data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) sequence continues on tqpair=(0x23a6190) for reads logged 07:27:55.438 through 07:27:55.983 ...]
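Reading the completion prints: the "(00/22)" pair is SCT/SC, i.e. status code type 0x0 (generic command status) and status code 0x22, which SPDK renders as COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 means the do-not-retry bit is clear, so the host is permitted to reissue the failed reads. A small sketch of how those fields pack into the 16-bit completion-status word follows; the field names mirror SPDK's struct spdk_nvme_status, but this standalone union is an illustration and assumes a little-endian ABI that allocates bit-fields LSB-first.

    /*
     * How "(00/22) ... p:0 m:0 dnr:0" maps onto the 16-bit NVMe completion
     * status word (phase tag + 15-bit status field).
     */
    #include <stdint.h>
    #include <stdio.h>

    union cqe_status {
        uint16_t raw;
        struct {
            uint16_t p   : 1; /* phase tag */
            uint16_t sc  : 8; /* status code        -> the "22" */
            uint16_t sct : 3; /* status code type   -> the "00" */
            uint16_t crd : 2; /* command retry delay */
            uint16_t m   : 1; /* more status information available */
            uint16_t dnr : 1; /* do not retry */
        } bits;
    };

    int main(void)
    {
        /* SCT 0x0 is the generic command status set; SC 0x22 within it is
         * "Command Transient Transport Error". dnr:0 leaves the command
         * eligible for retry, matching the completions in this log. */
        union cqe_status s = { .bits = { .sc = 0x22, .sct = 0x0 } };

        printf("(%02x/%02x) p:%u m:%u dnr:%u  raw=0x%04x\n",
               (unsigned)s.bits.sct, (unsigned)s.bits.sc, (unsigned)s.bits.p,
               (unsigned)s.bits.m, (unsigned)s.bits.dnr, (unsigned)s.raw);
        return 0;
    }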
00:32:44.988 27743.00 IOPS, 108.37 MiB/s [2024-11-27T06:27:56.193Z]
[... sequence continues for reads logged 07:27:55.992 through 07:27:56.429 on tqpair=(0x23a6190) ...]
00:32:45.251 [2024-11-27 07:27:56.437412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.251 [2024-11-27 07:27:56.437419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.251 [2024-11-27 07:27:56.446504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.251 [2024-11-27 07:27:56.446521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.251 [2024-11-27 07:27:56.446527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.456163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.456180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.456187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.463982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.463999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.464005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.473506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.473523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.473530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.482136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.482153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.482164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.490384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.490401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.490408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.499455] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.499472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.499478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.508255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.508273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.508279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.517253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.517270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.517276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.526355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.526372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.526378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.535955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.535972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.535978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.544740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.544757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.544763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.553565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.553581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.553587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.561797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.561814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.561820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.570838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.570855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.570862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.578762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.578782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.578789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.590022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.590040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.590046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.602993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.603010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.603016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.611252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.611269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.611275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.622707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.622724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.622730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.631340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.631357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.631363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.639952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.639969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.639975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.648282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.648299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.648305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.513 [2024-11-27 07:27:56.657789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.513 [2024-11-27 07:27:56.657806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.513 [2024-11-27 07:27:56.657812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.514 [2024-11-27 07:27:56.666062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.514 [2024-11-27 07:27:56.666079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.514 [2024-11-27 07:27:56.666085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.514 [2024-11-27 07:27:56.675070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.514 [2024-11-27 07:27:56.675086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.514 [2024-11-27 07:27:56.675093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.514 [2024-11-27 07:27:56.683998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.514 [2024-11-27 07:27:56.684015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.514 [2024-11-27 07:27:56.684022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.514 [2024-11-27 07:27:56.692935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.514 [2024-11-27 07:27:56.692953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.514 [2024-11-27 07:27:56.692960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.514 [2024-11-27 07:27:56.701198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.514 [2024-11-27 07:27:56.701215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.514 [2024-11-27 07:27:56.701221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.514 [2024-11-27 07:27:56.710100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.514 [2024-11-27 07:27:56.710118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.514 [2024-11-27 07:27:56.710124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.720475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.720492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.720499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.730285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.730302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.730309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.737892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.737908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.737918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.747125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.747142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:45.776 [2024-11-27 07:27:56.747148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.756639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.756656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.756663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.765275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.765293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.765299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.773588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.773605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.773611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.782453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.782469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.782475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.791744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.791760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.791767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.799494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.799512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.799519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.776 [2024-11-27 07:27:56.808314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.776 [2024-11-27 07:27:56.808332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15826 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.776 [2024-11-27 07:27:56.808338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.817537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.817557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.817563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.827000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.827018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.827024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.836623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.836639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.836646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.844353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.844369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.844375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.853885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.853902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.853908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.862629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.862645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.862651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.871293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.871309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.871316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.879662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.879679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.879686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.888459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.888476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.888483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.897399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.897416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.897422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.906528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.906545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.906551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.914661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.914678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.914684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.924244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.924261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.924267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.933886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 
00:32:45.777 [2024-11-27 07:27:56.933903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.933909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.942236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.942253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.942259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.950584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.950601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.950607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.959608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.959624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.959631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.968374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.968394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.968400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-11-27 07:27:56.977725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:45.777 [2024-11-27 07:27:56.977742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-11-27 07:27:56.977748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-11-27 07:27:56.986949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190) 00:32:46.038 [2024-11-27 07:27:56.986966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-11-27 07:27:56.986973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 28021.00 IOPS, 109.46 MiB/s [2024-11-27T06:27:57.243Z] [2024-11-27 07:27:56.995068] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23a6190)
00:32:46.038 [2024-11-27 07:27:56.995085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.038 [2024-11-27 07:27:56.995091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.038
00:32:46.038 Latency(us)
00:32:46.038 [2024-11-27T06:27:57.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:46.038 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:46.038 nvme0n1 : 2.00 28031.28 109.50 0.00 0.00 4560.91 2307.41 18022.40
00:32:46.038 [2024-11-27T06:27:57.243Z] ===================================================================================================================
00:32:46.038 [2024-11-27T06:27:57.243Z] Total : 28031.28 109.50 0.00 0.00 4560.91 2307.41 18022.40
00:32:46.038 {
00:32:46.038   "results": [
00:32:46.038     {
00:32:46.038       "job": "nvme0n1",
00:32:46.038       "core_mask": "0x2",
00:32:46.038       "workload": "randread",
00:32:46.038       "status": "finished",
00:32:46.038       "queue_depth": 128,
00:32:46.038       "io_size": 4096,
00:32:46.038       "runtime": 2.003833,
00:32:46.038       "iops": 28031.27805560643,
00:32:46.038       "mibps": 109.49717990471262,
00:32:46.038       "io_failed": 0,
00:32:46.038       "io_timeout": 0,
00:32:46.038       "avg_latency_us": 4560.906816687438,
00:32:46.038       "min_latency_us": 2307.4133333333334,
00:32:46.038       "max_latency_us": 18022.4
00:32:46.038     }
00:32:46.038   ],
00:32:46.038   "core_count": 1
00:32:46.038 }
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:46.038 | .driver_specific
00:32:46.038 | .nvme_error
00:32:46.038 | .status_code
00:32:46.038 | .command_transient_transport_error'
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2570940
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2570940 ']'
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2570940
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:46.038 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2570940
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2570940'
killing process with pid 2570940
07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2570940
Received shutdown signal, test time was about 2.000000 seconds
00:32:46.299
00:32:46.299 Latency(us)
00:32:46.299 [2024-11-27T06:27:57.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:46.299 [2024-11-27T06:27:57.504Z] ===================================================================================================================
00:32:46.299 [2024-11-27T06:27:57.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2570940
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2571702
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2571702 /var/tmp/bperf.sock
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2571702 ']'
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:46.299 07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
07:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:46.299 [2024-11-27 07:27:57.418646] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:32:46.299 [2024-11-27 07:27:57.418699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571702 ]
00:32:46.299 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:46.299 Zero copy mechanism will not be used.
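A minimal sketch of the bdevperf launch traced at host/digest.sh@57 above, runnable by hand against the same build tree (the flag readings in the comments are standard bdevperf options, not something this log states):

  # -m 2: core mask; -r: RPC socket that the bperf_rpc/bperf_py helpers target;
  # -w/-o/-q/-t: random reads, 131072-byte IOs, queue depth 16, 2 s runtime;
  # -z: start idle and wait for the perform_tests RPC before issuing IO.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z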
00:32:46.559 [2024-11-27 07:27:57.502655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:46.559 [2024-11-27 07:27:57.530835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:47.130 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:47.130 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:32:47.130 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:47.130 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:47.391 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:47.391 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:47.391 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:47.391 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.391 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:47.391 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:47.652 nvme0n1
00:32:47.652 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:47.652 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:47.652 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:47.652 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.652 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:47.652 07:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:47.913 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:47.913 Zero copy mechanism will not be used.
00:32:47.913 Running I/O for 2 seconds...
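Condensed from the xtrace above, the digest-error setup amounts to four direct calls plus the read-back that get_transient_errcount performed after the first pass; a sketch, with the commands taken verbatim from the trace, run from the spdk checkout root, and the final iostat/jq read-back assumed to follow this run the same way it followed the previous one:

  # Keep per-bdev NVMe error counters and retry failed IOs indefinitely.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP target with data digest (--ddgst) enabled, producing nvme0n1.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt accel crc32c results so received data fails digest verification
  # and reads complete as COMMAND TRANSIENT TRANSPORT ERROR (-o/-t/-i as traced).
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the configured workload, then read the transient-error counter back.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'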
00:32:47.913 [2024-11-27 07:27:58.881775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.913 [2024-11-27 07:27:58.881811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.913 [2024-11-27 07:27:58.881821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.913 [2024-11-27 07:27:58.893050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.913 [2024-11-27 07:27:58.893075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.913 [2024-11-27 07:27:58.893082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.913 [2024-11-27 07:27:58.904292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.904312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.904319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.915361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.915381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.915387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.926970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.926989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.926996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.936782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.936801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.936807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.947518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.947537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.947543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.956388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.956405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.956412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.963821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.963840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.963846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.971991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.972010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.972017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.982365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.982382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.982389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:58.993150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:58.993174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:58.993181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:59.003786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:59.003805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:59.003815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.914 [2024-11-27 07:27:59.015100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:47.914 [2024-11-27 07:27:59.015119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.914 [2024-11-27 07:27:59.015125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:47.914 [2024-11-27 07:27:59.022726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:47.914 [2024-11-27 07:27:59.022744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.914 [2024-11-27 07:27:59.022750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:47.914 [2024-11-27 07:27:59.027259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:47.914 [2024-11-27 07:27:59.027277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.914 [2024-11-27 07:27:59.027283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:47.914 [2024-11-27 07:27:59.032229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:47.914 [2024-11-27 07:27:59.032247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.914 [2024-11-27 07:27:59.032254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for the remaining reads on qid:1 (cid, lba and sqhd varying), log timestamps 07:27:59.036603 through 07:27:59.871609 ...]
00:32:48.712 4321.00 IOPS, 540.12 MiB/s [2024-11-27T06:27:59.917Z] [2024-11-27 07:27:59.882166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:48.712 [2024-11-27 07:27:59.882184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:48.712 [2024-11-27 07:27:59.882190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... triplets continue for further reads on qid:1, log timestamps 07:27:59.892153 through 07:27:59.996070 ...]
00:32:48.976 [2024-11-27 07:28:00.000643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:48.976 [2024-11-27 07:28:00.000661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:48.976 [2024-11-27 07:28:00.000668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:48.976 [2024-11-27 07:28:00.006844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:48.976 [2024-11-27 07:28:00.006864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:48.976 [2024-11-27 07:28:00.006870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.014777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.014800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.014807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.021644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.021662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.021669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.027944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.027962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.027969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.035105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.035123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.035130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.040786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.040804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.040811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.047946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.047965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.047971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.052551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.052570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.052577] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.057323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.057342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.057349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.062261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.062279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.062286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.071502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.976 [2024-11-27 07:28:00.071520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.976 [2024-11-27 07:28:00.071526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:48.976 [2024-11-27 07:28:00.081999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.082020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.082030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.090695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.090714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.090721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.100094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.100113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.100119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.109411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.109429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.109436] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.115607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.115625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.115632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.124788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.124807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.124813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.129320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.129338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.129344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.135582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.135601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.135611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.140143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.140166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.140173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.148321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.148341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.148347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.156502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.156520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:48.977 [2024-11-27 07:28:00.156527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.165854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.165873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.165879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.170293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.170310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.170316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:48.977 [2024-11-27 07:28:00.177528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:48.977 [2024-11-27 07:28:00.177546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.977 [2024-11-27 07:28:00.177552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.239 [2024-11-27 07:28:00.182702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.239 [2024-11-27 07:28:00.182721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.239 [2024-11-27 07:28:00.182728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.239 [2024-11-27 07:28:00.187863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.239 [2024-11-27 07:28:00.187882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.239 [2024-11-27 07:28:00.187889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.239 [2024-11-27 07:28:00.195117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.239 [2024-11-27 07:28:00.195139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.239 [2024-11-27 07:28:00.195146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.239 [2024-11-27 07:28:00.203198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.239 [2024-11-27 07:28:00.203216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.239 [2024-11-27 07:28:00.203223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.239 [2024-11-27 07:28:00.214413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.239 [2024-11-27 07:28:00.214432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.239 [2024-11-27 07:28:00.214438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.239 [2024-11-27 07:28:00.223613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.239 [2024-11-27 07:28:00.223632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.239 [2024-11-27 07:28:00.223638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.239 [2024-11-27 07:28:00.230993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.239 [2024-11-27 07:28:00.231012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.239 [2024-11-27 07:28:00.231018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.238834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.238853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.238860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.249655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.249673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.249679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.259500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.259518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.259525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.267274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.267293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.267300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.279156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.279178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.279185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.290503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.290522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.290528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.302008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.302025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.302032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.310115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.310134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.310140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.319933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.319951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.319957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.328384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.328403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.328410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.339481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.339499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.339506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.351369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.351388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.351394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.361316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.361335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.361345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.370904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.370922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.370928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.380803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.380822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.380828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.389811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.389830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.389836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.398702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.398721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.398727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.409274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 
[2024-11-27 07:28:00.409293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.409299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.421064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.421083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.421089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.240 [2024-11-27 07:28:00.433357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.240 [2024-11-27 07:28:00.433376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.240 [2024-11-27 07:28:00.433382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.444864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.444883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.444889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.456656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.456678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.456686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.468410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.468428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.468434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.479243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.479262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.479268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.491279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.491298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.491304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.503433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.503451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.503457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.513413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.513430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.513436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.520554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.520572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.520578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.525823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.525842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.525848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.538147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.538169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.538175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.549917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.549935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.549941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.560865] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.560883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.560889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.569605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.569623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.569629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.577598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.577616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.577623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.586370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.586389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.586395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.597459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.597477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.597484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.608762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.608780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.608786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.617609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.617628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.617634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
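Each burst in the stream above follows the same three-line pattern: nvme_tcp.c:1365 reports that the CRC32C computed over a received C2H data PDU did not match the PDU's data digest, nvme_qpair.c then prints the affected READ, and the command is completed with NVMe generic status 22h, Transient Transport Error, rendered here as (00/22). Because the test configures the NVMe bdev layer with --bdev-retry-count -1 (the bdev_nvme_set_options call is visible later in this trace), failed I/O is retried rather than reported, so the pattern repeats for the whole timed run. A quick way to tally a saved copy of this output (a sketch using standard grep/sort/uniq; autotest.log is a hypothetical file name):

  # digest failures detected by the host TCP transport
  grep -c 'data digest error on tqpair' autotest.log
  # completions surfaced with status 00/22 (Transient Transport Error)
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' autotest.log
  # breakdown by command ID, busiest first
  grep -o 'qid:1 cid:[0-9]*' autotest.log | sort | uniq -c | sort -rn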
00:32:49.502 [2024-11-27 07:28:00.627881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.627900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.627909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.638317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.502 [2024-11-27 07:28:00.638336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.502 [2024-11-27 07:28:00.638342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.502 [2024-11-27 07:28:00.650043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.503 [2024-11-27 07:28:00.650061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.503 [2024-11-27 07:28:00.650069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.503 [2024-11-27 07:28:00.660987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.503 [2024-11-27 07:28:00.661004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.503 [2024-11-27 07:28:00.661011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.503 [2024-11-27 07:28:00.671174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.503 [2024-11-27 07:28:00.671193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.503 [2024-11-27 07:28:00.671199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.503 [2024-11-27 07:28:00.679702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.503 [2024-11-27 07:28:00.679722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.503 [2024-11-27 07:28:00.679728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.503 [2024-11-27 07:28:00.689597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.503 [2024-11-27 07:28:00.689615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.503 [2024-11-27 07:28:00.689621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.503 [2024-11-27 07:28:00.698602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.503 [2024-11-27 07:28:00.698620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.503 [2024-11-27 07:28:00.698626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.503 [2024-11-27 07:28:00.702959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.503 [2024-11-27 07:28:00.702977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.503 [2024-11-27 07:28:00.702983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.763 [2024-11-27 07:28:00.711473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.763 [2024-11-27 07:28:00.711495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.763 [2024-11-27 07:28:00.711501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.763 [2024-11-27 07:28:00.721319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.763 [2024-11-27 07:28:00.721337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.763 [2024-11-27 07:28:00.721344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.763 [2024-11-27 07:28:00.732646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.763 [2024-11-27 07:28:00.732664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.763 [2024-11-27 07:28:00.732671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.763 [2024-11-27 07:28:00.741128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.763 [2024-11-27 07:28:00.741147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.763 [2024-11-27 07:28:00.741153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.763 [2024-11-27 07:28:00.751956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.763 [2024-11-27 07:28:00.751974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.763 [2024-11-27 07:28:00.751981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.763 [2024-11-27 07:28:00.762250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.763 [2024-11-27 07:28:00.762270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.763 [2024-11-27 07:28:00.762276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.763 [2024-11-27 07:28:00.772123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.764 [2024-11-27 07:28:00.772142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.764 [2024-11-27 07:28:00.772148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.764 [2024-11-27 07:28:00.783320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.764 [2024-11-27 07:28:00.783339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.764 [2024-11-27 07:28:00.783345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.764 [2024-11-27 07:28:00.794944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.764 [2024-11-27 07:28:00.794962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.764 [2024-11-27 07:28:00.794968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.764 [2024-11-27 07:28:00.806596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.764 [2024-11-27 07:28:00.806614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.764 [2024-11-27 07:28:00.806620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.764 [2024-11-27 07:28:00.818315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.764 [2024-11-27 07:28:00.818334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.764 [2024-11-27 07:28:00.818341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.764 [2024-11-27 07:28:00.829903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570) 00:32:49.764 [2024-11-27 07:28:00.829920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.764 [2024-11-27 07:28:00.829926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:49.764 [2024-11-27 07:28:00.839713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:49.764 [2024-11-27 07:28:00.839732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.764 [2024-11-27 07:28:00.839738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:49.764 [2024-11-27 07:28:00.847641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:49.764 [2024-11-27 07:28:00.847660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.764 [2024-11-27 07:28:00.847666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:49.764 [2024-11-27 07:28:00.856412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:49.764 [2024-11-27 07:28:00.856431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.764 [2024-11-27 07:28:00.856437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:49.764 [2024-11-27 07:28:00.866318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:49.764 [2024-11-27 07:28:00.866336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.764 [2024-11-27 07:28:00.866342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:49.764 3920.50 IOPS, 490.06 MiB/s [2024-11-27T06:28:00.969Z]
[2024-11-27 07:28:00.878527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x96b570)
00:32:49.764 [2024-11-27 07:28:00.878545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.764 [2024-11-27 07:28:00.878551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:49.764
00:32:49.764 Latency(us)
00:32:49.764 [2024-11-27T06:28:00.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:49.764 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:49.764 nvme0n1 : 2.01 3918.34 489.79 0.00 0.00 4079.03 638.29 18350.08
00:32:49.764 [2024-11-27T06:28:00.969Z] ===================================================================================================================
00:32:49.764 [2024-11-27T06:28:00.969Z] Total : 3918.34 489.79 0.00 0.00 4079.03 638.29 18350.08
00:32:49.764 {
00:32:49.764 "results": [
00:32:49.764 {
00:32:49.764 "job": "nvme0n1",
00:32:49.764 "core_mask": "0x2",
00:32:49.764 "workload": "randread",
00:32:49.764 "status": "finished",
00:32:49.764 "queue_depth": 16,
00:32:49.764 "io_size": 131072,
00:32:49.764 "runtime": 2.005185,
00:32:49.764 "iops": 3918.3416991449667,
00:32:49.764 "mibps": 489.79271239312084,
00:32:49.764 "io_failed": 0,
00:32:49.764 "io_timeout": 0,
00:32:49.764 "avg_latency_us": 4079.0254329472655,
00:32:49.764 "min_latency_us": 638.2933333333333,
00:32:49.764 "max_latency_us": 18350.08
00:32:49.764 }
00:32:49.764 ],
00:32:49.764 "core_count": 1
00:32:49.764 }
00:32:49.764 07:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:49.764 | .driver_specific
00:32:49.764 | .nvme_error
00:32:49.764 | .status_code
00:32:49.764 | .command_transient_transport_error'
00:32:49.764 07:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 254 > 0 ))
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2571702
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2571702 ']'
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2571702
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2571702
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2571702'
00:32:50.025 killing process with pid 2571702
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2571702
00:32:50.025 Received shutdown signal, test time was about 2.000000 seconds
00:32:50.025
00:32:50.025 Latency(us)
00:32:50.025 [2024-11-27T06:28:01.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:50.025 [2024-11-27T06:28:01.230Z] ===================================================================================================================
00:32:50.025 [2024-11-27T06:28:01.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:50.025 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2571702
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2572385 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2572385 /var/tmp/bperf.sock 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2572385 ']' 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:50.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.286 07:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:50.286 [2024-11-27 07:28:01.305598] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:32:50.286 [2024-11-27 07:28:01.305654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572385 ] 00:32:50.286 [2024-11-27 07:28:01.389626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.286 [2024-11-27 07:28:01.419231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.228 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.228 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:51.228 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:51.228 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:51.229 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:51.229 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.229 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.229 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.229 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller 
--ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:51.229 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:51.489 nvme0n1 00:32:51.489 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:51.489 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.489 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.489 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.489 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:51.489 07:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.489 Running I/O for 2 seconds... 00:32:51.749 [2024-11-27 07:28:02.706873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef46d0 00:32:51.749 [2024-11-27 07:28:02.707723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.749 [2024-11-27 07:28:02.707750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.715651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeaef0 00:32:51.750 [2024-11-27 07:28:02.716493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.716513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.724113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1710 00:32:51.750 [2024-11-27 07:28:02.724912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.724929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.732566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee38d0 00:32:51.750 [2024-11-27 07:28:02.733404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.733421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.741010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef35f0 00:32:51.750 [2024-11-27 07:28:02.741850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.741867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.749440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef57b0 00:32:51.750 [2024-11-27 07:28:02.750270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.750286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.757863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef7970 00:32:51.750 [2024-11-27 07:28:02.758712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.758727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.766316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef9b30 00:32:51.750 [2024-11-27 07:28:02.767121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.767137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.774743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eee190 00:32:51.750 [2024-11-27 07:28:02.775579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.775603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.783188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eebfd0 00:32:51.750 [2024-11-27 07:28:02.784034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.784050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.791592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee9e10 00:32:51.750 [2024-11-27 07:28:02.792411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.792427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.800010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee27f0 00:32:51.750 [2024-11-27 07:28:02.800849] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.800865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.808420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee49b0 00:32:51.750 [2024-11-27 07:28:02.809241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.809257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.816808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef46d0 00:32:51.750 [2024-11-27 07:28:02.817602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.817618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.825209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef6890 00:32:51.750 [2024-11-27 07:28:02.826042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.826058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.833613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8a50 00:32:51.750 [2024-11-27 07:28:02.834442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.834457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.842029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efac10 00:32:51.750 [2024-11-27 07:28:02.842863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.842879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.850424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed0b0 00:32:51.750 [2024-11-27 07:28:02.851262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.851278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.858836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeaef0 00:32:51.750 
[2024-11-27 07:28:02.859663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.859679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.867233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1710 00:32:51.750 [2024-11-27 07:28:02.868072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.868087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.875625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee38d0 00:32:51.750 [2024-11-27 07:28:02.876440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.876456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.884028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef35f0 00:32:51.750 [2024-11-27 07:28:02.884884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.884900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.892444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef57b0 00:32:51.750 [2024-11-27 07:28:02.893268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.893283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.900858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef7970 00:32:51.750 [2024-11-27 07:28:02.901660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.901676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.909304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef9b30 00:32:51.750 [2024-11-27 07:28:02.910098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.910114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.917685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with 
pdu=0x200016eee190 00:32:51.750 [2024-11-27 07:28:02.918538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.918553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.926073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eebfd0 00:32:51.750 [2024-11-27 07:28:02.926860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.750 [2024-11-27 07:28:02.926876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.750 [2024-11-27 07:28:02.934467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee9e10 00:32:51.750 [2024-11-27 07:28:02.935298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.751 [2024-11-27 07:28:02.935313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.751 [2024-11-27 07:28:02.942877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee27f0 00:32:51.751 [2024-11-27 07:28:02.943708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.751 [2024-11-27 07:28:02.943724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:51.751 [2024-11-27 07:28:02.951275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee49b0 00:32:51.751 [2024-11-27 07:28:02.952090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.751 [2024-11-27 07:28:02.952105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:02.959677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef46d0 00:32:52.012 [2024-11-27 07:28:02.960513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:02.960529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:02.968084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef6890 00:32:52.012 [2024-11-27 07:28:02.968935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:02.968952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:02.976494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dee510) with pdu=0x200016ef8a50 00:32:52.012 [2024-11-27 07:28:02.977331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:02.977347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:02.984902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efac10 00:32:52.012 [2024-11-27 07:28:02.985725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:02.985741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:02.993315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed0b0 00:32:52.012 [2024-11-27 07:28:02.994146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:02.994167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.001702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeaef0 00:32:52.012 [2024-11-27 07:28:03.002546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:03.002562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.010078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1710 00:32:52.012 [2024-11-27 07:28:03.010932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:03.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.018470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee38d0 00:32:52.012 [2024-11-27 07:28:03.019318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:03.019334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.026907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef35f0 00:32:52.012 [2024-11-27 07:28:03.027761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:03.027776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.035324] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef57b0 00:32:52.012 [2024-11-27 07:28:03.036166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:03.036182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.043727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef7970 00:32:52.012 [2024-11-27 07:28:03.044559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:03.044575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.052109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef9b30 00:32:52.012 [2024-11-27 07:28:03.052938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.012 [2024-11-27 07:28:03.052954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.012 [2024-11-27 07:28:03.060506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eee190 00:32:52.012 [2024-11-27 07:28:03.061339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.061355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.068899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eebfd0 00:32:52.013 [2024-11-27 07:28:03.069732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.069748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.077311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee9e10 00:32:52.013 [2024-11-27 07:28:03.078130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.078146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.085701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee27f0 00:32:52.013 [2024-11-27 07:28:03.086535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.086550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.094102] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee49b0 00:32:52.013 [2024-11-27 07:28:03.094954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.094970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.102485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef46d0 00:32:52.013 [2024-11-27 07:28:03.103310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.103326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.110875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef6890 00:32:52.013 [2024-11-27 07:28:03.111726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.111742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.119280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8a50 00:32:52.013 [2024-11-27 07:28:03.120133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.120149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.127681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efac10 00:32:52.013 [2024-11-27 07:28:03.128532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.128548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.136097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed0b0 00:32:52.013 [2024-11-27 07:28:03.136930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.136946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.144483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeaef0 00:32:52.013 [2024-11-27 07:28:03.145326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.145342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
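
The repeated tcp.c:2241 "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs above are the expected output of this test case, not a malfunction: host/digest.sh corrupts every 256th crc32c operation through the accel error injector, so a fraction of the NVMe/TCP data digest checks fail in flight and each affected WRITE completes with SCT 0x0 / SC 0x22 (command transient transport error), which the nvme bdev layer keeps retrying because bdev_nvme_set_options was called with --bdev-retry-count -1. A minimal standalone sketch of the same setup, condensed from the shell trace above (paths are relative to an SPDK checkout and, like the RPC socket used for the injection, are assumptions that must match the local environment):

  # start bdevperf idle (-z) and configure it over its own RPC socket;
  # the harness waits for /var/tmp/bperf.sock to appear before issuing RPCs
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperf_rpc='./scripts/rpc.py -s /var/tmp/bperf.sock'
  # keep per-bdev NVMe error counters and retry failed I/O indefinitely
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with the TCP data digest enabled (--ddgst)
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt every 256th crc32c in the accel layer (the trace does this via
  # rpc_cmd, i.e. the default RPC socket of the application under test)
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # kick off the timed 2-second run
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
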
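When the timed run completes, the harness does not parse these log lines at all; digest.sh@71 reads the error counters back over RPC and only asserts that the transient-transport-error count is non-zero (254 in the randread run further up). A sketch of that readout, using the same jq path as the trace, with the socket and bdev name assumed as above:

  # these counters exist because bdev_nvme_set_options was given --nvme-error-stat
  count=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 ))   # the test passes only if digest errors were actually observed
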
00:32:52.013 [2024-11-27 07:28:03.152875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1710 00:32:52.013 [2024-11-27 07:28:03.153684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.153700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.161283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee38d0 00:32:52.013 [2024-11-27 07:28:03.162121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.162137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.169677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef35f0 00:32:52.013 [2024-11-27 07:28:03.170527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.170542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.178080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef57b0 00:32:52.013 [2024-11-27 07:28:03.178927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.178943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.186471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef7970 00:32:52.013 [2024-11-27 07:28:03.187320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.187336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.194327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efc560 00:32:52.013 [2024-11-27 07:28:03.195143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.195161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.203705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee0ea0 00:32:52.013 [2024-11-27 07:28:03.204648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.204664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:32:52.013 [2024-11-27 07:28:03.212114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016edfdc0 00:32:52.013 [2024-11-27 07:28:03.213065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.013 [2024-11-27 07:28:03.213084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.220528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef0350 00:32:52.275 [2024-11-27 07:28:03.221475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.221491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.228933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef1430 00:32:52.275 [2024-11-27 07:28:03.229885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.229901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.238370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef2510 00:32:52.275 [2024-11-27 07:28:03.239761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.239776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.246286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efc560 00:32:52.275 [2024-11-27 07:28:03.247374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.254656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee73e0 00:32:52.275 [2024-11-27 07:28:03.255746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.255761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.263120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4140 00:32:52.275 [2024-11-27 07:28:03.264201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.264216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.271539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef9f68 00:32:52.275 [2024-11-27 07:28:03.272588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.272603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.279965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efb480 00:32:52.275 [2024-11-27 07:28:03.281050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.281065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.288420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee84c0 00:32:52.275 [2024-11-27 07:28:03.289510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.289525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.296887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3060 00:32:52.275 [2024-11-27 07:28:03.297986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.298001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.305345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efb048 00:32:52.275 [2024-11-27 07:28:03.306434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.306449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.313800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef6cc8 00:32:52.275 [2024-11-27 07:28:03.314894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.314910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.322243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efc560 00:32:52.275 [2024-11-27 07:28:03.323348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.323363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.330698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee73e0 00:32:52.275 [2024-11-27 07:28:03.331785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.331801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.339136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4140 00:32:52.275 [2024-11-27 07:28:03.340188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.340204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.347572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef9f68 00:32:52.275 [2024-11-27 07:28:03.348658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.348673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.356034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efb480 00:32:52.275 [2024-11-27 07:28:03.357123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.357139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.364509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee84c0 00:32:52.275 [2024-11-27 07:28:03.365573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.365589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.372942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3060 00:32:52.275 [2024-11-27 07:28:03.374021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.374037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.380746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee0a68 00:32:52.275 [2024-11-27 07:28:03.381999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.382015] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.388527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3498 00:32:52.275 [2024-11-27 07:28:03.389225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.389241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.397073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4578 00:32:52.275 [2024-11-27 07:28:03.397787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.275 [2024-11-27 07:28:03.397803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.275 [2024-11-27 07:28:03.405494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eef6a8 00:32:52.275 [2024-11-27 07:28:03.406202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.406217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.413924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef46d0 00:32:52.276 [2024-11-27 07:28:03.414596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.414612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.422361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef57b0 00:32:52.276 [2024-11-27 07:28:03.423060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.423075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.430768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6738 00:32:52.276 [2024-11-27 07:28:03.431453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.431471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.439187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef9b30 00:32:52.276 [2024-11-27 07:28:03.439892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.439907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.447602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efd640 00:32:52.276 [2024-11-27 07:28:03.448319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.448334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.456024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eec408 00:32:52.276 [2024-11-27 07:28:03.456740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.456756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.464435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efc998 00:32:52.276 [2024-11-27 07:28:03.465163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.465179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.276 [2024-11-27 07:28:03.472871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eee5c8 00:32:52.276 [2024-11-27 07:28:03.473581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.276 [2024-11-27 07:28:03.473598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.537 [2024-11-27 07:28:03.481552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeff18 00:32:52.537 [2024-11-27 07:28:03.482365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.537 [2024-11-27 07:28:03.482381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:52.537 [2024-11-27 07:28:03.490131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef31b8 00:32:52.537 [2024-11-27 07:28:03.490957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.537 [2024-11-27 07:28:03.490973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.537 [2024-11-27 07:28:03.498557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef2948 00:32:52.537 [2024-11-27 07:28:03.499401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.537 [2024-11-27 
07:28:03.499416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.537 [2024-11-27 07:28:03.506996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee99d8 00:32:52.537 [2024-11-27 07:28:03.507796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.537 [2024-11-27 07:28:03.507812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.537 [2024-11-27 07:28:03.515426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1b48 00:32:52.538 [2024-11-27 07:28:03.516212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.516229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.523856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee2c28 00:32:52.538 [2024-11-27 07:28:03.524697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.524713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.532289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3d08 00:32:52.538 [2024-11-27 07:28:03.533110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.533127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.541042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef96f8 00:32:52.538 [2024-11-27 07:28:03.541611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.541628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.549780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1f80 00:32:52.538 [2024-11-27 07:28:03.550686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.550701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.558092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6fa8 00:32:52.538 [2024-11-27 07:28:03.558998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3085 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:52.538 [2024-11-27 07:28:03.559013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.566634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eec840 00:32:52.538 [2024-11-27 07:28:03.567558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.567574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.575042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed920 00:32:52.538 [2024-11-27 07:28:03.575974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.575990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.583477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeea00 00:32:52.538 [2024-11-27 07:28:03.584411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.584428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.591901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4de8 00:32:52.538 [2024-11-27 07:28:03.592835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.592850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.600581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee0ea0 00:32:52.538 [2024-11-27 07:28:03.601315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.601331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.609149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef1868 00:32:52.538 [2024-11-27 07:28:03.610147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.610168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.617820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef5378 00:32:52.538 [2024-11-27 07:28:03.618968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21725 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.618984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.626252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efdeb0 00:32:52.538 [2024-11-27 07:28:03.627361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.627377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.633193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4578 00:32:52.538 [2024-11-27 07:28:03.633867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.633882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.641603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eef6a8 00:32:52.538 [2024-11-27 07:28:03.642243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.642258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.649999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef46d0 00:32:52.538 [2024-11-27 07:28:03.650697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.650715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.659479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef57b0 00:32:52.538 [2024-11-27 07:28:03.660628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.660644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.666962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef35f0 00:32:52.538 [2024-11-27 07:28:03.667408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.667424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.675889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eebfd0 00:32:52.538 [2024-11-27 07:28:03.676695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 
nsid:1 lba:7551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.676711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.684244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efa7d8 00:32:52.538 [2024-11-27 07:28:03.685040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.685055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.692669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efd640 00:32:52.538 [2024-11-27 07:28:03.694441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.694458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:52.538 30034.00 IOPS, 117.32 MiB/s [2024-11-27T06:28:03.743Z] [2024-11-27 07:28:03.701057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef5be8 00:32:52.538 [2024-11-27 07:28:03.701855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.701871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.709470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef4b08 00:32:52.538 [2024-11-27 07:28:03.710239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.710255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.717902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef4f40 00:32:52.538 [2024-11-27 07:28:03.718694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.718710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.726332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8e88 00:32:52.538 [2024-11-27 07:28:03.727096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.727112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.538 [2024-11-27 07:28:03.734770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef1430 00:32:52.538 [2024-11-27 07:28:03.735570] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.538 [2024-11-27 07:28:03.735587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.743189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016edf118 00:32:52.801 [2024-11-27 07:28:03.743991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.744006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.751600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efbcf0 00:32:52.801 [2024-11-27 07:28:03.752403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.752418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.760061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef31b8 00:32:52.801 [2024-11-27 07:28:03.760858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.760874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.768499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef2948 00:32:52.801 [2024-11-27 07:28:03.769287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.769303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.776922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee0ea0 00:32:52.801 [2024-11-27 07:28:03.777711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.777727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.785338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016edfdc0 00:32:52.801 [2024-11-27 07:28:03.785990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.786005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.793983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eea680 00:32:52.801 
[2024-11-27 07:28:03.794887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.794903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.802550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eec840 00:32:52.801 [2024-11-27 07:28:03.803468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.803484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.810972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6fa8 00:32:52.801 [2024-11-27 07:28:03.811903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.811920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.819408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eddc00 00:32:52.801 [2024-11-27 07:28:03.820314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.820330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.827851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eea248 00:32:52.801 [2024-11-27 07:28:03.828763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.828779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.836258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efe720 00:32:52.801 [2024-11-27 07:28:03.837169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.837185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.844675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef0ff8 00:32:52.801 [2024-11-27 07:28:03.845550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.845566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.801 [2024-11-27 07:28:03.853085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with 
pdu=0x200016efdeb0 00:32:52.801 [2024-11-27 07:28:03.854000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.801 [2024-11-27 07:28:03.854016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.860954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efef90 00:32:52.802 [2024-11-27 07:28:03.861859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.861874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.870409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeb328 00:32:52.802 [2024-11-27 07:28:03.871441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.871460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.878251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efd208 00:32:52.802 [2024-11-27 07:28:03.879267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.879282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.886080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8618 00:32:52.802 [2024-11-27 07:28:03.886759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.886775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.894404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef7da8 00:32:52.802 [2024-11-27 07:28:03.895065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.895081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.902811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef6cc8 00:32:52.802 [2024-11-27 07:28:03.903459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.903474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.911233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dee510) with pdu=0x200016efac10 00:32:52.802 [2024-11-27 07:28:03.911904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.911920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.919639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee8d30 00:32:52.802 [2024-11-27 07:28:03.920317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.920333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.928044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4578 00:32:52.802 [2024-11-27 07:28:03.928720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.928736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.937543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eef6a8 00:32:52.802 [2024-11-27 07:28:03.938671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.938687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.945398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efda78 00:32:52.802 [2024-11-27 07:28:03.946196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.946213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.953752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1f80 00:32:52.802 [2024-11-27 07:28:03.954551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.954566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.962178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef96f8 00:32:52.802 [2024-11-27 07:28:03.962960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.962976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.970575] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef1868 00:32:52.802 [2024-11-27 07:28:03.971383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.971399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.978986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8a50 00:32:52.802 [2024-11-27 07:28:03.979765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.979781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.987399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eff3c8 00:32:52.802 [2024-11-27 07:28:03.988181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.988196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:52.802 [2024-11-27 07:28:03.995816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeea00 00:32:52.802 [2024-11-27 07:28:03.996597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.802 [2024-11-27 07:28:03.996612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.004244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4de8 00:32:53.064 [2024-11-27 07:28:04.005048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.005063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.012665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed0b0 00:32:53.064 [2024-11-27 07:28:04.013459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.013476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.021057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efd208 00:32:53.064 [2024-11-27 07:28:04.021845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.021860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.029464] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3060 00:32:53.064 [2024-11-27 07:28:04.030234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.030250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.037892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eec408 00:32:53.064 [2024-11-27 07:28:04.038678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.038694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.046319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef81e0 00:32:53.064 [2024-11-27 07:28:04.047115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.047131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.054752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee7c50 00:32:53.064 [2024-11-27 07:28:04.055539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.055554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.063156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6300 00:32:53.064 [2024-11-27 07:28:04.063950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.063965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.071571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eee190 00:32:53.064 [2024-11-27 07:28:04.072336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.072352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.079989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efe720 00:32:53.064 [2024-11-27 07:28:04.080768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.080784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 
[2024-11-27 07:28:04.088441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eea248 00:32:53.064 [2024-11-27 07:28:04.089247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.089266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.096857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eddc00 00:32:53.064 [2024-11-27 07:28:04.097640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.097656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.105274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6fa8 00:32:53.064 [2024-11-27 07:28:04.106018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.106034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.113667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8e88 00:32:53.064 [2024-11-27 07:28:04.114470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.114487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.122070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef4f40 00:32:53.064 [2024-11-27 07:28:04.122867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.064 [2024-11-27 07:28:04.122883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.064 [2024-11-27 07:28:04.130492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef4b08 00:32:53.064 [2024-11-27 07:28:04.131292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.131308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.138906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef92c0 00:32:53.065 [2024-11-27 07:28:04.139698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.139714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 
m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.147332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eef270 00:32:53.065 [2024-11-27 07:28:04.148113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.148128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.155728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efd640 00:32:53.065 [2024-11-27 07:28:04.156526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.156542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.164166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efa7d8 00:32:53.065 [2024-11-27 07:28:04.164970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.164986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.172596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eebfd0 00:32:53.065 [2024-11-27 07:28:04.173356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.173372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.181017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee2c28 00:32:53.065 [2024-11-27 07:28:04.181815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.181831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.189453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3d08 00:32:53.065 [2024-11-27 07:28:04.190228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.190244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.197854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed4e8 00:32:53.065 [2024-11-27 07:28:04.198653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.198669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 
cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.206265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6738 00:32:53.065 [2024-11-27 07:28:04.207012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.207027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.214658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efc998 00:32:53.065 [2024-11-27 07:28:04.215441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.215457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.223076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efda78 00:32:53.065 [2024-11-27 07:28:04.223879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.223895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.231512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee1f80 00:32:53.065 [2024-11-27 07:28:04.232310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.232326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.239917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef96f8 00:32:53.065 [2024-11-27 07:28:04.240676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.240692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.248313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef1868 00:32:53.065 [2024-11-27 07:28:04.249112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.249126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.256720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8a50 00:32:53.065 [2024-11-27 07:28:04.257461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.257477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.065 [2024-11-27 07:28:04.265148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eff3c8 00:32:53.065 [2024-11-27 07:28:04.265935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.065 [2024-11-27 07:28:04.265950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.273570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eeea00 00:32:53.328 [2024-11-27 07:28:04.274326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.274341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.281976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee4de8 00:32:53.328 [2024-11-27 07:28:04.282736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.282751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.290396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed0b0 00:32:53.328 [2024-11-27 07:28:04.291176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.291191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.298788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efd208 00:32:53.328 [2024-11-27 07:28:04.299573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.299588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.307211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3060 00:32:53.328 [2024-11-27 07:28:04.308003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.308021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.315630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eec408 00:32:53.328 [2024-11-27 07:28:04.316427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.316443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.324050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef81e0 00:32:53.328 [2024-11-27 07:28:04.324850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.324866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.332465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee7c50 00:32:53.328 [2024-11-27 07:28:04.333220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.333236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.340849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6300 00:32:53.328 [2024-11-27 07:28:04.341624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.341639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.349246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eee190 00:32:53.328 [2024-11-27 07:28:04.350039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.350054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.357661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efe720 00:32:53.328 [2024-11-27 07:28:04.358415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.358430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.366076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eea248 00:32:53.328 [2024-11-27 07:28:04.366878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.366894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.374487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eddc00 00:32:53.328 [2024-11-27 07:28:04.375226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.375241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.383024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6fa8 00:32:53.328 [2024-11-27 07:28:04.383829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.328 [2024-11-27 07:28:04.383845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.328 [2024-11-27 07:28:04.391433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef8e88 00:32:53.328 [2024-11-27 07:28:04.392206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.392221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.399836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef4f40 00:32:53.329 [2024-11-27 07:28:04.400631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.400647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.408250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef4b08 00:32:53.329 [2024-11-27 07:28:04.409050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.409066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.416675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef92c0 00:32:53.329 [2024-11-27 07:28:04.417455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.417470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.425074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eef270 00:32:53.329 [2024-11-27 07:28:04.425833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.425849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.433478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efd640 00:32:53.329 [2024-11-27 07:28:04.434221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 
07:28:04.434237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.441870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efa7d8 00:32:53.329 [2024-11-27 07:28:04.442652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.442667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.450279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eebfd0 00:32:53.329 [2024-11-27 07:28:04.451065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.451080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.458695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee2c28 00:32:53.329 [2024-11-27 07:28:04.459468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.459483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.467104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee3d08 00:32:53.329 [2024-11-27 07:28:04.467897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.467912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.475497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016eed4e8 00:32:53.329 [2024-11-27 07:28:04.476279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.476295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.483892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ee6738 00:32:53.329 [2024-11-27 07:28:04.484678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.329 [2024-11-27 07:28:04.484694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:53.329 [2024-11-27 07:28:04.492300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efc998 00:32:53.329 [2024-11-27 07:28:04.493084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:53.329 [2024-11-27 07:28:04.493099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:32:53.329 [2024-11-27 07:28:04.500714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016efda78
00:32:53.329 [2024-11-27 07:28:04.501477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:53.329 [2024-11-27 07:28:04.501493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c data digest error on tqpair=(0x1dee510), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining queued WRITEs, pdu offsets 0x200016ee1f80 through 0x200016ef4b08, at roughly 8 ms intervals; only cid and lba vary ...]
00:32:53.591 [2024-11-27 07:28:04.694226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee510) with pdu=0x200016ef92c0
00:32:53.591 [2024-11-27 07:28:04.695012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:53.592 [2024-11-27 07:28:04.695027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:32:53.592 30219.50 IOPS, 118.04 MiB/s
00:32:53.592 Latency(us)
00:32:53.592 [2024-11-27T06:28:04.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.592 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:53.592 nvme0n1 : 2.01 30222.45 118.06 0.00 0.00 4230.26 1706.67 13926.40
00:32:53.592 [2024-11-27T06:28:04.797Z] ===================================================================================================================
00:32:53.592 [2024-11-27T06:28:04.797Z] Total : 30222.45 118.06 0.00 0.00 4230.26 1706.67 13926.40
00:32:53.592 {
00:32:53.592 "results": [
00:32:53.592 {
00:32:53.592 "job": "nvme0n1",
00:32:53.592 "core_mask": "0x2",
00:32:53.592 "workload": "randwrite",
00:32:53.592 "status": "finished",
00:32:53.592 "queue_depth": 128,
00:32:53.592 "io_size": 4096,
00:32:53.592 "runtime": 2.006191,
00:32:53.592 "iops": 30222.446417115818,
00:32:53.592 "mibps": 118.05643131685866,
00:32:53.592 "io_failed": 0,
00:32:53.592 "io_timeout": 0,
00:32:53.592 "avg_latency_us": 4230.262975766372,
00:32:53.592 "min_latency_us": 1706.6666666666667,
00:32:53.592 "max_latency_us": 13926.4
00:32:53.592 }
00:32:53.592 ],
00:32:53.592 "core_count": 1
00:32:53.592 }
00:32:53.592 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:53.592 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:53.592 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:53.592 | .driver_specific
00:32:53.592 | .nvme_error
00:32:53.592 | .status_code
00:32:53.592 | .command_transient_transport_error'
00:32:53.592 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 237 > 0 ))
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2572385
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2572385 ']'
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2572385
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2572385
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:53.852 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2572385'
killing process with pid 2572385
00:32:53.853 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2572385
00:32:53.853 Received shutdown signal, test time was about 2.000000 seconds
00:32:53.853
00:32:53.853 Latency(us)
00:32:53.853 [2024-11-27T06:28:05.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.853 [2024-11-27T06:28:05.058Z] ===================================================================================================================
00:32:53.853 [2024-11-27T06:28:05.058Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:53.853 07:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2572385
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2573071
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2573071 /var/tmp/bperf.sock
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:54.114 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2573071 ']'
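The readout just above is the interesting plumbing of this test: get_transient_errcount asks bdevperf's RPC server for bdev_get_iostat and filters the per-bdev NVMe error counters with jq, and the (( 237 > 0 )) check asserts that count is non-zero. A minimal stand-alone sketch of the same readout, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and the attached bdev is named nvme0n1 as in this job:

  # count of completions with status COMMAND TRANSIENT TRANSPORT ERROR,
  # accumulated because bdev_nvme was started with --nvme-error-stat
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'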
00:32:54.115 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:54.115 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:54.115 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:54.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:54.115 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:54.115 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:54.115 [2024-11-27 07:28:05.125711] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:32:54.115 [2024-11-27 07:28:05.125767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573071 ]
00:32:54.115 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:54.115 Zero copy mechanism will not be used.
00:32:54.115 [2024-11-27 07:28:05.208441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:54.115 [2024-11-27 07:28:05.236701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:55.057 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:55.057 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:32:55.057 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:55.057 07:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:55.057 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:55.057 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.057 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:55.057 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.057 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:55.057 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:55.319 nvme0n1
00:32:55.319 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:55.319 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
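Stripped of the xtrace noise, the setup traced above is four RPCs. A sketch of the equivalent by-hand sequence follows; the socket targets are an assumption here (bperf_rpc sends the bdev_nvme_* calls to bdevperf's /var/tmp/bperf.sock, while rpc_cmd sends the accel_error_inject_error calls to the nvmf target's own RPC socket, taken to be the default below):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # keep per-status-code NVMe error counters (the script also sets --bdev-retry-count -1)
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: clear any previous crc32c error injection
  $RPC accel_error_inject_error -o crc32c -t disable
  # attach the TCP controller with data digest (--ddgst) enabled
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt the crc32c result at interval 32, so data digests stop matching
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

With the digests corrupted, each affected 128 KiB WRITE in the run below fails its data digest check and completes with COMMAND TRANSIENT TRANSPORT ERROR, which is exactly what the tcp.c / nvme_qpair.c message storm that follows records.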
00:32:55.319 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:55.319 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.319 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:55.319 07:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:55.319 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:55.319 Zero copy mechanism will not be used.
00:32:55.319 Running I/O for 2 seconds...
00:32:55.319 [2024-11-27 07:28:06.457820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:55.319 [2024-11-27 07:28:06.457885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.319 [2024-11-27 07:28:06.457912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8, 128 KiB WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats every few milliseconds; only the cid, lba, sqhd and timestamp fields vary ...]
00:32:55.849 [2024-11-27 07:28:06.850576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:55.849 [2024-11-27 07:28:06.850697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.849 [2024-11-27 07:28:06.850712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:55.849 [2024-11-27 07:28:06.854742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:55.849 [2024-11-27 07:28:06.854859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.849 [2024-11-27 07:28:06.854874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.849 [2024-11-27 07:28:06.858711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.849 [2024-11-27 07:28:06.858759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.849 [2024-11-27 07:28:06.858773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.849 [2024-11-27 07:28:06.862229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.849 [2024-11-27 07:28:06.862276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.849 [2024-11-27 07:28:06.862291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.849 [2024-11-27 07:28:06.865847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.849 [2024-11-27 07:28:06.865894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.849 [2024-11-27 07:28:06.865909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.849 [2024-11-27 07:28:06.869395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.849 [2024-11-27 07:28:06.869442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.849 [2024-11-27 07:28:06.869457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.849 [2024-11-27 07:28:06.872805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.849 [2024-11-27 07:28:06.872852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.849 [2024-11-27 07:28:06.872868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.849 [2024-11-27 07:28:06.876451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.849 [2024-11-27 07:28:06.876497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.849 [2024-11-27 07:28:06.876515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.849 [2024-11-27 07:28:06.879940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.880002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 
07:28:06.880017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.883580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.883637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.883652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.887661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.887714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.887729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.891487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.891571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.891587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.896004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.896055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.896070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.899678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.899738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.899753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.903317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.903378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.903393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.907130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.907194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:55.850 [2024-11-27 07:28:06.907208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.910871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.910936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.910951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.914679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.914742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.914758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.918577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.918631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.918646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.922484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.922534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.922549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.926322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.926367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.926382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.930105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.930171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.930186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.936046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.936170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.936186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.939974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.940036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.940052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.943572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.943630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.943645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.947109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.947165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.947181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.950451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.950518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.950533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.955103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.955184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.955199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.959767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.959829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.959844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.963831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.963891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.963907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.968058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.968119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.968135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.972194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.972256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.972271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.976030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.976093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.976108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.979943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.980016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.980034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.984647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.984701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.850 [2024-11-27 07:28:06.984716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.850 [2024-11-27 07:28:06.988918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.850 [2024-11-27 07:28:06.988971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:06.988986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:06.993097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:06.993157] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:06.993177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:06.997225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:06.997282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:06.997298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.001049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.001108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.001123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.004819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.004879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.004894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.008762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.008817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.008832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.012362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.012562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.012577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.016448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.016650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.016666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.023267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.023372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.023388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.027394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.027594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.027609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.031486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.031690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.031705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.035189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.035391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.035407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.038835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.039033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.039048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.042726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.042933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.042948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.851 [2024-11-27 07:28:07.046534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:55.851 [2024-11-27 07:28:07.046736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.851 [2024-11-27 07:28:07.046751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.051103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 
07:28:07.051272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.051287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.055364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.055568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.055583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.059337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.059531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.059546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.063924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.063997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.064012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.067958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.068170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.068186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.071956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.072155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.072176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.075865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.076059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.076075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.079254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 
00:32:56.115 [2024-11-27 07:28:07.079460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.079476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.082948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.083144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.083166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.086553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.086750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.086769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.090228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.090412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.090427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.093528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.093707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.093723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.096624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.096810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.096825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.099829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.100012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.100028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.102879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.103068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.103084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.106189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.106370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.106385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.109191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.109372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.109387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.112211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.112394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.112409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.115200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.115401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.115416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.118397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.118583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.118598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.121421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.121615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.121630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.124476] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.124667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.124682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.127511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.127694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.127709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.130681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.130872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.130887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.133706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.115 [2024-11-27 07:28:07.133906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.115 [2024-11-27 07:28:07.133921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.115 [2024-11-27 07:28:07.136719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.136910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.136926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.139722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.139909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.139924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.142717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.142912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.142927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.145722] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.145901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.145916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.148747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.148931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.148946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.152949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.153087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.157806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.158025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.158040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.161550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.161735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.161750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.164582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.164787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.164802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.167754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.167937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.167952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.116 
[2024-11-27 07:28:07.171143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.171350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.171369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.174474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.174713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.174728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.177883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.178064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.178080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.180960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.181181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.181197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.184358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.184554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.184569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.187399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.187579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.187594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.190473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.190658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.190673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.193772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.193957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.193972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.196804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.197001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.197016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.199842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.200027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.200043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.202948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.203181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.203196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.206107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.206312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.206327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.209124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.209319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.209335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.116 [2024-11-27 07:28:07.212121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.116 [2024-11-27 07:28:07.212310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.116 [2024-11-27 07:28:07.212326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.116 [2024-11-27 07:28:07.215157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.116 [2024-11-27 07:28:07.215358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.116 [2024-11-27 07:28:07.215373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.116 [2024-11-27 07:28:07.218396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.116 [2024-11-27 07:28:07.218588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.116 [2024-11-27 07:28:07.218604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.116 [2024-11-27 07:28:07.221593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.116 [2024-11-27 07:28:07.221779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.116 [2024-11-27 07:28:07.221795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.116 [2024-11-27 07:28:07.226496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.226688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.226704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.230776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.230958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.230973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.234742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.234949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.234965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.239077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.239282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.239297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.244957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.245145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.245165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.248889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.249095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.249110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.253108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.253295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.253310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.256684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.256867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.256882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.260283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.260476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.260492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.263824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.264009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.264028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.268267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.268452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.268469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.272205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.272423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.272439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.275787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.275973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.275989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.279613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.279798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.279813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.283501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.283572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.283587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.287488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.287594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.287609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.290957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.291148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.291169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.294995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.295179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.295195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.298555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.298785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.298800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.302140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.302322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.302338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.305634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.305819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.305834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.309031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.309190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.309206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.312212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.117 [2024-11-27 07:28:07.312391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.117 [2024-11-27 07:28:07.312412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.117 [2024-11-27 07:28:07.315252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.315433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.315449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.318402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.318603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.318618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.321596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.321777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.321792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.324624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.324808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.324823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.327675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.327866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.327881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.330705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.330891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.330907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.333760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.333939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.333954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.338713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.338839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.338855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.345050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.345243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.345259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.348796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.348979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.348994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.352204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.352394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.352411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.355706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.355892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.355907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.359125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.359310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.359331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.362257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.380 [2024-11-27 07:28:07.362445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.380 [2024-11-27 07:28:07.362461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.380 [2024-11-27 07:28:07.365387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.365577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.365592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.368805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.368992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.369008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.372237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.372422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.372439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.376413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.376597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.376613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.382303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.382574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.382589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.388472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.388653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.388668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.394952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.395163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.395179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.400325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.400486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.407620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.407754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.407770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.415118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.415296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.415311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.422219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.422377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.422393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.428570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.428776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.428792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.435353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.435488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.435504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.442149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.442347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.442363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.447708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.447933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.447949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.452824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.453015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.453031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.458191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.458367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.458382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.381 7937.00 IOPS, 992.12 MiB/s [2024-11-27T06:28:07.586Z] [2024-11-27 07:28:07.465761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.465933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.465948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.470607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.470725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.470740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.473993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.474091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.474106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.477085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.477210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.477226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.480403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.480509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.480524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.483186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.483292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.483308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.485978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.381 [2024-11-27 07:28:07.486084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.381 [2024-11-27 07:28:07.486100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.381 [2024-11-27 07:28:07.488725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.488827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.488846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.491477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.491583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.491598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.494253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.494360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.494375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.497015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.497121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.497136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.499752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.499855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.499870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.502499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.502610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.502626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.505750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.505847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.505862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.510150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.510254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.510269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.516054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.516169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.516186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.520093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.520205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.520220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.522912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.523019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.523034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.526997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.527090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.527106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.530544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.530640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.530655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.534503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.534598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.534614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.538628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.538724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.538740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.541969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.542067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.542083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.545182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.545288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.545304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.548609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.548705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.548720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.552220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.552317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.552332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.555647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.555756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.555772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.559495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.559592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.559607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.562670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.562765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.562780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.566036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.566131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.566146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.569217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.569323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.569338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.573090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.382 [2024-11-27 07:28:07.573191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.382 [2024-11-27 07:28:07.573206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.382 [2024-11-27 07:28:07.577105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.383 [2024-11-27 07:28:07.577206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.383 [2024-11-27 07:28:07.577221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.581697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.581825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.581843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.586970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.587070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.587085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.592685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.592790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.592805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.596310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.596405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.596420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.599837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.599944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.599960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.603258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.603355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.603370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.607234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.607342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.607357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.610429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.610526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.610541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.613815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.613909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.613924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.617589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.617689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.617704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.621328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.621436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.621451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.624996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.625090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.625106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.628667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.628761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.628776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.634219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.634335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.634350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.638017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.638125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.638140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.640993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.646 [2024-11-27 07:28:07.641101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.646 [2024-11-27 07:28:07.641116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.646 [2024-11-27 07:28:07.643993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.644103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.644119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.647057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.647156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.647177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.649849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.649952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.649968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.652626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.652730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.652745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.655374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.655477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.655492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.658111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.658223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.658238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.660872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.660977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.660992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.664626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.664730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.664745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.667925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.668032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.668048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.671019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.671114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.671129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.675307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.675403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.675421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.678960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.679069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.679084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.681710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.681815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.681831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.684433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.684539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.684555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.687139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.687252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.687267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.689885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.689996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.690011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.692614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.692717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.692732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.695305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.695410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.695425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.698040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.647 [2024-11-27 07:28:07.698145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.647 [2024-11-27 07:28:07.698166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.647 [2024-11-27 07:28:07.700765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.700877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.700892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.703466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.703569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.703584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.706554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.706657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.706673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.709267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.709375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.709390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.711980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.712085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.712101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.714692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.714796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.714812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.717600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.717701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.717716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.722167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.722270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.722286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.727397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.727491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.727506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.730894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.730999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.731014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.734296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.734401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.734417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.738127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.738223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.738238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.741920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.742028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.648 [2024-11-27 07:28:07.742043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:56.648 [2024-11-27 07:28:07.745376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:56.648 [2024-11-27 07:28:07.745470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1
nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.648 [2024-11-27 07:28:07.745485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.648 [2024-11-27 07:28:07.748981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.648 [2024-11-27 07:28:07.749078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.648 [2024-11-27 07:28:07.749093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.648 [2024-11-27 07:28:07.754850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.648 [2024-11-27 07:28:07.754946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.648 [2024-11-27 07:28:07.754960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.648 [2024-11-27 07:28:07.758710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.648 [2024-11-27 07:28:07.758814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.648 [2024-11-27 07:28:07.758830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.648 [2024-11-27 07:28:07.762261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.648 [2024-11-27 07:28:07.762363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.648 [2024-11-27 07:28:07.762380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.648 [2024-11-27 07:28:07.767264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.648 [2024-11-27 07:28:07.767495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.648 [2024-11-27 07:28:07.767512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.648 [2024-11-27 07:28:07.773014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.648 [2024-11-27 07:28:07.773189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.773204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.779800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.779884] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.779899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.784529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.784616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.784631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.787964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.788023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.788038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.790804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.790851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.790866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.793603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.793656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.793671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.796362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.796431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.796446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.799389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.799434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.799449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.802281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.802340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.802355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.805673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.805726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.805740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.809424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.809487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.809502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.813178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.813230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.813245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.816709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.816765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.816781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.821012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.821060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.821075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.825375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.825420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.825435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.829195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 
07:28:07.829267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.829282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.833355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.833449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.833464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.837137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.837195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.837210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.840583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.840633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.649 [2024-11-27 07:28:07.840649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.649 [2024-11-27 07:28:07.843438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.649 [2024-11-27 07:28:07.843489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.650 [2024-11-27 07:28:07.843504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.650 [2024-11-27 07:28:07.846238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.650 [2024-11-27 07:28:07.846289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.650 [2024-11-27 07:28:07.846304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.914 [2024-11-27 07:28:07.849022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.914 [2024-11-27 07:28:07.849067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.914 [2024-11-27 07:28:07.849082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.914 [2024-11-27 07:28:07.851781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 
00:32:56.914 [2024-11-27 07:28:07.851833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.914 [2024-11-27 07:28:07.851848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.914 [2024-11-27 07:28:07.854511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.854570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.854584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.857223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.857275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.857292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.859917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.859967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.859982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.862651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.862708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.862722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.865369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.865419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.865434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.868084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.868131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.868145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.870809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.870888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.870903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.873869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.873951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.873966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.878703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.878789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.878804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.883337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.883426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.883441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.887734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.887869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.887884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.894672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.894773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.894787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.901321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.901407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.901422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.908663] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.908762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.908777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.913423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.913479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.913495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.917210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.917393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.917408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.920744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.920829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.920844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.924183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.924227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.924242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.927907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.927969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.915 [2024-11-27 07:28:07.927984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.915 [2024-11-27 07:28:07.931561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.915 [2024-11-27 07:28:07.931621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.931637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.935001] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.935048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.935063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.938203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.938277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.938292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.941047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.941089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.941105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.944025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.944076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.944091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.947886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.947950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.947965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.951145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.951220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.951235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.954941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.955007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.955022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.916 
[2024-11-27 07:28:07.959252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.959317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.959335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.964194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.964257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.964272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.968007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.968074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.968090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.971192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.971258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.971272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.974994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.975114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.975129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.980536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.980606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.980622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.984570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.984681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.984696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.987760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.987829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.987844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.991038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.991119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.991134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.994479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.994558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.994573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:07.997480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:07.997548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:07.997563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.916 [2024-11-27 07:28:08.000245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.916 [2024-11-27 07:28:08.000315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.916 [2024-11-27 07:28:08.000330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.002985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.003067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.003082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.005779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.005859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.005874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.008582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.008659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.008674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.011585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.011659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.011674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.014300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.014371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.014386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.017001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.017071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.017086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.019717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.019790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.019805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.022427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.022504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.022519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.025227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.025347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.025363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.028962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.029036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.029051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.033924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.034117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.034133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.037457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.037543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.037558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.040502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.040597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.040612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.043615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.043726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.043741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.047066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.047167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.047184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.051121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.051219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.051234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.054696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.054872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.054887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.060420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.060596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.060611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.064608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.917 [2024-11-27 07:28:08.064758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.917 [2024-11-27 07:28:08.064773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.917 [2024-11-27 07:28:08.069993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.070213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.070229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.076363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.076448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.076463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.083568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.083654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.083669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.090568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.090677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 
07:28:08.090692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.095124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.095174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.095190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.098005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.098050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.098065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.100791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.100833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.100848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.103664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.103708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.103723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.106453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.106499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.106513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.109245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.109289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.109304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.112057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.112107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:56.918 [2024-11-27 07:28:08.112121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:56.918 [2024-11-27 07:28:08.114850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:56.918 [2024-11-27 07:28:08.114895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.918 [2024-11-27 07:28:08.114910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.117632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.117679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.117694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.120655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.120697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.120712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.124538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.124579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.124594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.128527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.128572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.128587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.131974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.132078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.132092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.136379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.136464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.136479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.142109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.142243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.142258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.146913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.147098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.147113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.153962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.154062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.154076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.157668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.157790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.157808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.160694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.160768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.160783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.163489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.163561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.163576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.166317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.166386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.166401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.169121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.169197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.169212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.171898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.171962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.171977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.174673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.174747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.174762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.177421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.177486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.177501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.180174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.180252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.180267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.182945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.183027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.183042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.185709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.185783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.181 [2024-11-27 07:28:08.185798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.181 [2024-11-27 07:28:08.188433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.181 [2024-11-27 07:28:08.188507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.188522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.191151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.191228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.191243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.193862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.193931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.193946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.196580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.196660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.196675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.199300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.199372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.199386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.202001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.202077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.202092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.206168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.206231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.206246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.211334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.211442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.211457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.217417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.217483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.217498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.222388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.222454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.222469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.225916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.225985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.226001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.229170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.229253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.229268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.232696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.232763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.232778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.235934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 
07:28:08.236000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.236016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.239247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.239322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.239337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.242309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.242381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.242400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.245024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.245095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.245110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.247770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.247845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.247860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.250480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.250552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.250567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.253182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.253257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.253272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.255931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with 
pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.256003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.256018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.258691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.258753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.258768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.262694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.262760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.262775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.266830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.266895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.266910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.270812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.270881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.270896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.275179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.275245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.275260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.279074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.279142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.182 [2024-11-27 07:28:08.279162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.182 [2024-11-27 07:28:08.282682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.182 [2024-11-27 07:28:08.282749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.282764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.286316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.286381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.286395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.289598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.289661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.289676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.293283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.293361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.293376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.296775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.296846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.296861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.299849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.299914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.299929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.302776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.302841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.302855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.305647] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.305725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.305740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.308789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.308873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.308887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.311915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.311985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.312000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.314646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.314715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.314730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.317341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.317406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.317421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.320673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.320744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.320758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.324552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.324617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.324632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.183 
[2024-11-27 07:28:08.329122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.329199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.329217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.332591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.332656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.332671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.335648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.335720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.335735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.338587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.338671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.341366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.341448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.341463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.344306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.344380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.344395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.347037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.347113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.347128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.349738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.349813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.349828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.352440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.352512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.352526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.355228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.355300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.355315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.357974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.358045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.358060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.361730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.361796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.361811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.366534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.366613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.183 [2024-11-27 07:28:08.366628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.183 [2024-11-27 07:28:08.370293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.183 [2024-11-27 07:28:08.370373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.184 [2024-11-27 07:28:08.370389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.184 [2024-11-27 07:28:08.374984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.184 [2024-11-27 07:28:08.375069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.184 [2024-11-27 07:28:08.375084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.184 [2024-11-27 07:28:08.380369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.184 [2024-11-27 07:28:08.380438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.184 [2024-11-27 07:28:08.380453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.384405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.384478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.384493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.388255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.388322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.388337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.391643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.391710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.391725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.396002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.396105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.396120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.401115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.401301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.401316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.408114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.408195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.408211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.415204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.415405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.415420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.422087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.422190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.422205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.428831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.428932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.428947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.436579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.436661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.436676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.444149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.444232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.444249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.445 [2024-11-27 07:28:08.451655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8 00:32:57.445 [2024-11-27 07:28:08.451726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.445 [2024-11-27 07:28:08.451741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:57.445 [2024-11-27 07:28:08.459549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dee850) with pdu=0x200016eff3c8
00:32:57.445 [2024-11-27 07:28:08.459608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.446 [2024-11-27 07:28:08.459622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:57.446 8155.50 IOPS, 1019.44 MiB/s
00:32:57.446 Latency(us)
00:32:57.446 [2024-11-27T06:28:08.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:57.446 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:57.446 nvme0n1 : 2.00 8145.73 1018.22 0.00 0.00 1959.97 1058.13 8301.23
00:32:57.446 [2024-11-27T06:28:08.651Z] ===================================================================================================================
00:32:57.446 [2024-11-27T06:28:08.651Z] Total : 8145.73 1018.22 0.00 0.00 1959.97 1058.13 8301.23
00:32:57.446 {
00:32:57.446   "results": [
00:32:57.446     {
00:32:57.446       "job": "nvme0n1",
00:32:57.446       "core_mask": "0x2",
00:32:57.446       "workload": "randwrite",
00:32:57.446       "status": "finished",
00:32:57.446       "queue_depth": 16,
00:32:57.446       "io_size": 131072,
00:32:57.446       "runtime": 2.004853,
00:32:57.446       "iops": 8145.734375537758,
00:32:57.446       "mibps": 1018.2167969422197,
00:32:57.446       "io_failed": 0,
00:32:57.446       "io_timeout": 0,
00:32:57.446       "avg_latency_us": 1959.9706537668649,
00:32:57.446       "min_latency_us": 1058.1333333333334,
00:32:57.446       "max_latency_us": 8301.226666666667
00:32:57.446     }
00:32:57.446   ],
00:32:57.446   "core_count": 1
00:32:57.446 }
00:32:57.446 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:57.446 | .driver_specific
00:32:57.446 | .nvme_error
00:32:57.446 | .status_code
00:32:57.446 | .command_transient_transport_error'
00:32:57.446 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 527 > 0 ))
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2573071
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2573071 ']'
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2573071
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2573071
07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2573071'
killing process with pid 2573071
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2573071
00:32:57.708 Received shutdown signal, test time was about 2.000000 seconds
00:32:57.708
00:32:57.708 Latency(us)
[2024-11-27T06:28:08.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:57.708 [2024-11-27T06:28:08.913Z] ===================================================================================================================
00:32:57.708 [2024-11-27T06:28:08.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2573071
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2570668
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2570668 ']'
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2570668
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2570668
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:57.708 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:57.969 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2570668'
00:32:57.969 killing process with pid 2570668
00:32:57.969 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2570668
00:32:57.969 07:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2570668
00:32:57.969
00:32:57.969 real 0m16.525s
00:32:57.969 user 0m32.668s
00:32:57.969 sys 0m3.670s
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:57.969 ************************************
00:32:57.969 END TEST nvmf_digest_error
00:32:57.969 ************************************
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
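[Editor's note] The xtrace above shows digest.sh@71 asserting that the run produced transient transport errors: it fetches iostat for the bperf bdev over bdevperf's RPC socket and extracts the NVMe error counter with jq (527 in this run). A minimal bash sketch of that helper, paraphrased from the trace; the rpc.py invocation and the jq path are taken verbatim from the log, while the function body itself is an assumption, not SPDK's exact source:

```bash
#!/usr/bin/env bash
# Sketch of the get_transient_errcount helper traced above (assumed shape).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock   # bdevperf's RPC socket, as in the log

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat returns per-bdev counters; the NVMe error breakdown
    # lives under driver_specific, and the jq path below is the one the
    # trace runs to pull out the transient transport error count.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The test then asserts at least one transient error was observed:
(( $(get_transient_errcount nvme0n1) > 0 )) || exit 1
```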
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:57.969 rmmod nvme_tcp
00:32:57.969 rmmod nvme_fabrics
00:32:57.969 rmmod nvme_keyring
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2570668 ']'
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2570668
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2570668 ']'
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2570668
00:32:57.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2570668) - No such process
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2570668 is not found'
Process with pid 2570668 is not found
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:57.969 07:28:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:00.513
00:33:00.513 real 0m43.386s
00:33:00.513 user 1m7.816s
00:33:00.513 sys 0m13.316s
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:00.513 ************************************
00:33:00.513 END TEST nvmf_digest
00:33:00.513 ************************************
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
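[Editor's note] Both teardown paths above route through the killprocess helper in autotest_common.sh: the first call reaps the live bperf process (pid 2573071), the second finds pid 2570668 already gone and takes the "not found" branch at line 981. A condensed sketch reconstructed from the traced line numbers (@954 through @981); the real helper may differ in details such as how sudo-owned processes are handled:

```bash
# Sketch of the killprocess flow visible in the xtrace (assumed shape).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                      # @954: require a pid
    if kill -0 "$pid" 2>/dev/null; then            # @958: still alive?
        local process_name=
        if [ "$(uname)" = Linux ]; then            # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        [ "$process_name" = sudo ] && return 1     # @964: never kill sudo itself
        echo "killing process with pid $pid"       # @972
        kill "$pid"                                # @973
        wait "$pid" 2>/dev/null || true            # @978: reap the child
    else
        echo "Process with pid $pid is not found"  # @981: already-gone branch
    fi
}
```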
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.513 ************************************
00:33:00.513 START TEST nvmf_bdevperf
00:33:00.513 ************************************
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:00.513 * Looking for test storage...
00:33:00.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
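[Editor's note] The xtrace above has just entered cmp_versions in scripts/common.sh to evaluate lt 1.15 2, the lcov version gate; the per-component loop it steps through continues below. A condensed, runnable sketch of that comparison, paraphrased from the traced lines (the real function also dispatches '>', '==', and similar operators and normalizes components through a decimal helper):

```bash
# Sketch of the version comparison traced above and below (assumed shape).
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15); split on . - :
    local op=$2                        # only '<' is exercised in this trace
    IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    # Walk components up to the longer of the two lists; components that
    # are missing on one side behave as 0 in bash arithmetic.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ver1[v] > ver2[v] )) && return 1
        (( ver1[v] < ver2[v] )) && return 0   # here: 1 < 2 decides at v=0
    done
    return 1
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov predates the 2.x option syntax"
```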
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:33:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:00.513 --rc genhtml_branch_coverage=1
00:33:00.513 --rc genhtml_function_coverage=1
00:33:00.513 --rc genhtml_legend=1
00:33:00.513 --rc geninfo_all_blocks=1
00:33:00.513 --rc geninfo_unexecuted_blocks=1
00:33:00.513
00:33:00.513 '
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:33:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:00.513 --rc genhtml_branch_coverage=1
00:33:00.513 --rc genhtml_function_coverage=1
00:33:00.513 --rc genhtml_legend=1
00:33:00.513 --rc geninfo_all_blocks=1
00:33:00.513 --rc geninfo_unexecuted_blocks=1
00:33:00.513
00:33:00.513 '
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:33:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:00.513 --rc genhtml_branch_coverage=1
00:33:00.513 --rc genhtml_function_coverage=1
00:33:00.513 --rc genhtml_legend=1
00:33:00.513 --rc geninfo_all_blocks=1
00:33:00.513 --rc geninfo_unexecuted_blocks=1
00:33:00.513
00:33:00.513 '
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:33:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:00.513 --rc genhtml_branch_coverage=1
00:33:00.513 --rc genhtml_function_coverage=1
00:33:00.513 --rc genhtml_legend=1
00:33:00.513 --rc geninfo_all_blocks=1
00:33:00.513 --rc geninfo_unexecuted_blocks=1
00:33:00.513
00:33:00.513 '
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.513 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:00.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:00.514 07:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:08.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:08.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
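The PCI scan traced above fills the e810/x722/mlx device-ID tables and then resolves each matching PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of that lookup (a re-creation for illustration, not the harness function itself; the two addresses are the E810 ports this run reports just below):

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      # each network PCI function lists its netdev name(s) under net/
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] || continue   # skip functions with no bound net driver
          echo "Found net devices under $pci: ${path##*/}"
      done
  done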
00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:08.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:08.662 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.662 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:33:08.663 00:33:08.663 --- 10.0.0.2 ping statistics --- 00:33:08.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.663 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:33:08.663 00:33:08.663 --- 10.0.0.1 ping statistics --- 00:33:08.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.663 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2578085 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2578085 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2578085 ']' 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.663 07:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.663 [2024-11-27 07:28:19.025647] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
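The sequence just traced pins the test topology: one E810 port (cvl_0_0, the target side) is moved into the cvl_0_0_ns_spdk namespace with address 10.0.0.2/24, the peer port (cvl_0_1, the initiator side) stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction proves connectivity before nvmf_tgt is started inside the namespace. Replayed as a standalone sketch (distilled from the commands above, not nvmf/common.sh itself; interface names and addresses are the ones this run used):

  ip netns add cvl_0_0_ns_spdk                     # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                               # initiator -> target reachability check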
00:33:08.663 [2024-11-27 07:28:19.025734] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.663 [2024-11-27 07:28:19.127808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:08.663 [2024-11-27 07:28:19.179998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:08.663 [2024-11-27 07:28:19.180053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:08.663 [2024-11-27 07:28:19.180062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:08.663 [2024-11-27 07:28:19.180070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:08.663 [2024-11-27 07:28:19.180077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:08.663 [2024-11-27 07:28:19.181962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:08.663 [2024-11-27 07:28:19.182121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.663 [2024-11-27 07:28:19.182123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:08.663 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.663 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:08.663 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:08.663 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:08.663 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.924 [2024-11-27 07:28:19.903089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.924 Malloc0 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
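The target is then configured through rpc_cmd, the harness wrapper around scripts/rpc.py: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, create subsystem cnode1, and (continued just below) attach the namespace and the 10.0.0.2:4420 listener. Roughly the same bring-up issued directly against the target's RPC socket, as a hedged sketch with arguments copied from this trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock   # the socket waitforlisten polls above
  $rpc -s $sock nvmf_create_transport -t tcp -o -u 8192
  $rpc -s $sock bdev_malloc_create 64 512 -b Malloc0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420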
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:08.924 [2024-11-27 07:28:19.979510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:08.924 {
00:33:08.924 "params": {
00:33:08.924 "name": "Nvme$subsystem",
00:33:08.924 "trtype": "$TEST_TRANSPORT",
00:33:08.924 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:08.924 "adrfam": "ipv4",
00:33:08.924 "trsvcid": "$NVMF_PORT",
00:33:08.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:08.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:08.924 "hdgst": ${hdgst:-false},
00:33:08.924 "ddgst": ${ddgst:-false}
00:33:08.924 },
00:33:08.924 "method": "bdev_nvme_attach_controller"
00:33:08.924 }
00:33:08.924 EOF
00:33:08.924 )")
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:33:08.924 07:28:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:33:08.924 07:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:08.924 "params": {
00:33:08.924 "name": "Nvme1",
00:33:08.924 "trtype": "tcp",
00:33:08.924 "traddr": "10.0.0.2",
00:33:08.924 "adrfam": "ipv4",
00:33:08.924 "trsvcid": "4420",
00:33:08.924 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:08.924 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:08.924 "hdgst": false,
00:33:08.924 "ddgst": false
00:33:08.924 },
00:33:08.924 "method": "bdev_nvme_attach_controller"
00:33:08.924 }'
00:33:08.924 [2024-11-27 07:28:20.040893] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:33:08.924 [2024-11-27 07:28:20.040970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578300 ]
00:33:09.185 [2024-11-27 07:28:20.151609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:09.185 [2024-11-27 07:28:20.212676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:09.446 Running I/O for 1 seconds...
00:33:10.389 8608.00 IOPS, 33.62 MiB/s
00:33:10.389 Latency(us)
[2024-11-27T06:28:21.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.389 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:10.389 Verification LBA range: start 0x0 length 0x4000
00:33:10.389 Nvme1n1 : 1.01 8614.56 33.65 0.00 0.00 14791.04 3317.76 14636.37
[2024-11-27T06:28:21.594Z] ===================================================================================================================
[2024-11-27T06:28:21.594Z] Total : 8614.56 33.65 0.00 0.00 14791.04 3317.76 14636.37
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2578588
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:10.651 {
00:33:10.651 "params": {
00:33:10.651 "name": "Nvme$subsystem",
00:33:10.651 "trtype": "$TEST_TRANSPORT",
00:33:10.651 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:10.651 "adrfam": "ipv4",
00:33:10.651 "trsvcid": "$NVMF_PORT",
00:33:10.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:10.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:10.651 "hdgst": ${hdgst:-false},
00:33:10.651 "ddgst": ${ddgst:-false}
00:33:10.651 },
00:33:10.651 "method": "bdev_nvme_attach_controller"
00:33:10.651 }
00:33:10.651 EOF
00:33:10.651 )")
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:10.651 07:28:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:10.651 "params": { 00:33:10.651 "name": "Nvme1", 00:33:10.651 "trtype": "tcp", 00:33:10.651 "traddr": "10.0.0.2", 00:33:10.651 "adrfam": "ipv4", 00:33:10.651 "trsvcid": "4420", 00:33:10.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:10.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:10.651 "hdgst": false, 00:33:10.651 "ddgst": false 00:33:10.651 }, 00:33:10.651 "method": "bdev_nvme_attach_controller" 00:33:10.651 }' 00:33:10.651 [2024-11-27 07:28:21.719432] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:33:10.651 [2024-11-27 07:28:21.719512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578588 ] 00:33:10.651 [2024-11-27 07:28:21.810565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.911 [2024-11-27 07:28:21.862443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.911 Running I/O for 15 seconds... 00:33:13.233 11054.00 IOPS, 43.18 MiB/s [2024-11-27T06:28:24.706Z] 11159.00 IOPS, 43.59 MiB/s [2024-11-27T06:28:24.706Z] 07:28:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2578085 00:33:13.501 07:28:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:13.501 [2024-11-27 07:28:24.690917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.690958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.690977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.690987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.690998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 
07:28:24.691072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.501 [2024-11-27 07:28:24.691403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.501 [2024-11-27 07:28:24.691415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.502 [2024-11-27 07:28:24.691639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.502 [2024-11-27 07:28:24.691867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.502 [2024-11-27 07:28:24.691876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
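From the kill -9 of the target (pid 2578085) above, every I/O still queued on the connection is completed with ABORTED - SQ DELETION, and the per-command dump continues in this same pattern for the remaining outstanding reads and writes. When working from a saved copy of this console output, a summary is quicker than reading the flood entry by entry; a hedged one-liner, with build.log as a placeholder name for the saved file:

  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c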
00:33:13.502 [2024-11-27 07:28:24.691887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.691898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.691912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.691922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.691932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.691940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.691957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.691967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.691979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.691987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.692002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.692013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.692028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.692037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.692052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.502 [2024-11-27 07:28:24.692063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.502 [2024-11-27 07:28:24.692076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:13.503 [2024-11-27 07:28:24.692303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.503 [2024-11-27 07:28:24.692776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.503 [2024-11-27 07:28:24.692784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.692988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.692999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.504 [2024-11-27 07:28:24.693382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.504 [2024-11-27 07:28:24.693391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f170 is same with the state(6) to be set
00:33:13.504 [2024-11-27 07:28:24.693401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:13.504 [2024-11-27 07:28:24.693407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:13.504 [2024-11-27 07:28:24.693413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0
00:33:13.504 [2024-11-27 07:28:24.693421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:13.505 [2024-11-27 07:28:24.697023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.505 [2024-11-27 07:28:24.697075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.833 [2024-11-27 07:28:24.697887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.833 [2024-11-27 07:28:24.697905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.833 [2024-11-27 07:28:24.697913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.833 [2024-11-27 07:28:24.698136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.833 [2024-11-27 07:28:24.698369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.833 [2024-11-27 07:28:24.698380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.833 [2024-11-27 07:28:24.698390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.698398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.711229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.711760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.711800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.711812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.712053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.712287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.712297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.712305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.712314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.725195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.725890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.725930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.725941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.726194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.726420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.726428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.726436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.726445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.739076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.739748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.739790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.739802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.740043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.740279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.740289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.740297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.740305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.752936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.753747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.753790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.753802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.754044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.754276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.754286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.754294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.754302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.766934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.767591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.767641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.767653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.767897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.768122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.768131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.768140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.768148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.780797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.781504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.781551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.781564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.781809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.782035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.782044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.782052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.782060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.794708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.795441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.795490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.795502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.795749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.795975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.795984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.795992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.796000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.808655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.809270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.809321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.809335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.809591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.809818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.809827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.809835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.809844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.822501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.823201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.823260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.823273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.823526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.823753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.823762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.834 [2024-11-27 07:28:24.823771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.834 [2024-11-27 07:28:24.823779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.834 [2024-11-27 07:28:24.836469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.834 [2024-11-27 07:28:24.837096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.834 [2024-11-27 07:28:24.837124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.834 [2024-11-27 07:28:24.837133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.834 [2024-11-27 07:28:24.837366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.834 [2024-11-27 07:28:24.837592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.834 [2024-11-27 07:28:24.837601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.837610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.837618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.850501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.851199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.851263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.851276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.851533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.851762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.851780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.851790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.851801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.864495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.865216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.865280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.865294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.865552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.865780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.865790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.865799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.865809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.878499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.879127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.879157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.879181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.879406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.879647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.879657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.879664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.879672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.892545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.893122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.893148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.893157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.893390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.893613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.893623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.893631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.893638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.906514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.907111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.907136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.907144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.907373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.907596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.907608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.907621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.907630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.920503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.921079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.921139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.921152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.921421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.921650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.921660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.921668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.921677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.934378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.935111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.935187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.935201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.935457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.935685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.935694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.935703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.935712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.948391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.949042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.949081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.949092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.949328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.949552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.949569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.949579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.949590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.962262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.962928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.962991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.963005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.963276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.963506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.835 [2024-11-27 07:28:24.963516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.835 [2024-11-27 07:28:24.963527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.835 [2024-11-27 07:28:24.963537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.835 [2024-11-27 07:28:24.976253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:13.835 [2024-11-27 07:28:24.976935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.835 [2024-11-27 07:28:24.976997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:13.835 [2024-11-27 07:28:24.977011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:13.835 [2024-11-27 07:28:24.977281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:13.835 [2024-11-27 07:28:24.977512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:13.836 [2024-11-27 07:28:24.977525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:13.836 [2024-11-27 07:28:24.977537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:13.836 [2024-11-27 07:28:24.977548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:13.836 [2024-11-27 07:28:24.990098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:13.836 [2024-11-27 07:28:24.990745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.836 [2024-11-27 07:28:24.990772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:13.836 [2024-11-27 07:28:24.990780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:13.836 [2024-11-27 07:28:24.991011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:13.836 [2024-11-27 07:28:24.991240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:13.836 [2024-11-27 07:28:24.991251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:13.836 [2024-11-27 07:28:24.991258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:13.836 [2024-11-27 07:28:24.991266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:13.836 [2024-11-27 07:28:25.004120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:13.836 [2024-11-27 07:28:25.004772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.836 [2024-11-27 07:28:25.004826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:13.836 [2024-11-27 07:28:25.004839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:13.836 [2024-11-27 07:28:25.005090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:13.836 [2024-11-27 07:28:25.005330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:13.836 [2024-11-27 07:28:25.005341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:13.836 [2024-11-27 07:28:25.005350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:13.836 [2024-11-27 07:28:25.005359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:13.836 [2024-11-27 07:28:25.018019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:13.836 [2024-11-27 07:28:25.018508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.836 [2024-11-27 07:28:25.018534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:13.836 [2024-11-27 07:28:25.018543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:13.836 [2024-11-27 07:28:25.018765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:13.836 [2024-11-27 07:28:25.018987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:13.836 [2024-11-27 07:28:25.018996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:13.836 [2024-11-27 07:28:25.019003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:13.836 [2024-11-27 07:28:25.019010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:13.836 [2024-11-27 07:28:25.031868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:13.836 [2024-11-27 07:28:25.032475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.836 [2024-11-27 07:28:25.032524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:13.836 [2024-11-27 07:28:25.032537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:13.836 [2024-11-27 07:28:25.032784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:13.836 [2024-11-27 07:28:25.033010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:13.836 [2024-11-27 07:28:25.033020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:13.836 [2024-11-27 07:28:25.033033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:13.836 [2024-11-27 07:28:25.033042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.144 [2024-11-27 07:28:25.045707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.144 [2024-11-27 07:28:25.046336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.144 [2024-11-27 07:28:25.046385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.144 [2024-11-27 07:28:25.046398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.144 [2024-11-27 07:28:25.046646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.144 [2024-11-27 07:28:25.046872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.144 [2024-11-27 07:28:25.046882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.144 [2024-11-27 07:28:25.046891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.144 [2024-11-27 07:28:25.046899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.144 [2024-11-27 07:28:25.059554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.144 [2024-11-27 07:28:25.060237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.144 [2024-11-27 07:28:25.060287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.144 [2024-11-27 07:28:25.060300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.144 [2024-11-27 07:28:25.060549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.144 [2024-11-27 07:28:25.060774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.144 [2024-11-27 07:28:25.060784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.144 [2024-11-27 07:28:25.060792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.060801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.145 [2024-11-27 07:28:25.073454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.074130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.074186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.074199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.074445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.074670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.074679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.074687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.074696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.145 9626.33 IOPS, 37.60 MiB/s [2024-11-27T06:28:25.350Z] [2024-11-27 07:28:25.089014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.089590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.089636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.089647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.089891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.090117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.090126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.090134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.090142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
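(Editor's note) The "9626.33 IOPS, 37.60 MiB/s" fragment interleaved above is the periodic throughput line from the I/O generator running alongside the reconnect loop (note its separate UTC-style timestamp). The two figures are mutually consistent if the workload issues 4 KiB I/Os — an assumption, since the job configuration is not part of this excerpt: 9626.33 IOPS × 4096 B ≈ 37.60 MiB/s. A one-line check:

```c
/* Sanity-check the interleaved perf line. The 4 KiB block size is an
 * assumption; the job configuration is not shown in this log excerpt. */
#include <stdio.h>

int main(void)
{
    double iops = 9626.33;                /* from the log line above */
    double bytes_per_io = 4096.0;         /* assumed 4 KiB I/O size  */
    printf("%.2f MiB/s\n", iops * bytes_per_io / (1024.0 * 1024.0));
    return 0;                             /* prints 37.60, matching the log */
}
```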
00:33:14.145 [2024-11-27 07:28:25.103008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.103622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.103644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.103653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.103874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.104095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.104103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.104110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.104117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.145 [2024-11-27 07:28:25.116957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.117605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.117652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.117664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.117910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.118135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.118144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.118152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.118171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.145 [2024-11-27 07:28:25.130844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.131442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.131501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.131514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.131760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.131986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.131994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.132002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.132010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.145 [2024-11-27 07:28:25.144681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.145404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.145458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.145471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.145721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.145948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.145958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.145966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.145974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.145 [2024-11-27 07:28:25.158653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.159427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.159487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.159499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.159752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.159978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.159988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.159996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.160006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.145 [2024-11-27 07:28:25.172678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.173300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.173363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.173377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.173642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.173870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.173880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.173889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.173898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.145 [2024-11-27 07:28:25.186599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.145 [2024-11-27 07:28:25.187262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.145 [2024-11-27 07:28:25.187309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.145 [2024-11-27 07:28:25.187320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.145 [2024-11-27 07:28:25.187561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.145 [2024-11-27 07:28:25.187786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.145 [2024-11-27 07:28:25.187796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.145 [2024-11-27 07:28:25.187803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.145 [2024-11-27 07:28:25.187812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.146 [2024-11-27 07:28:25.200486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.201084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.201110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.201121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.201353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.201577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.201587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.201594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.201602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.146 [2024-11-27 07:28:25.214464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.215177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.215239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.215253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.215508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.215739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.215756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.215765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.215773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.146 [2024-11-27 07:28:25.228471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.229223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.229282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.229295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.229550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.229777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.229788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.229796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.229805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.146 [2024-11-27 07:28:25.242462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.243133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.243197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.243211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.243462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.243688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.243698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.243706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.243715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.146 [2024-11-27 07:28:25.256378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.257093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.257147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.257172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.257423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.257649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.257658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.257666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.257675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.146 [2024-11-27 07:28:25.270333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.270927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.270979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.270991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.271250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.271477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.271487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.271495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.271504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.146 [2024-11-27 07:28:25.284363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.284977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.285001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.285009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.285240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.285464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.285473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.285481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.285488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.146 [2024-11-27 07:28:25.298309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.298877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.298926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.298940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.299197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.299424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.299435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.299443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.299452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.146 [2024-11-27 07:28:25.312164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.312854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.312907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.312919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.313178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.313405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.313413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.146 [2024-11-27 07:28:25.313422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.146 [2024-11-27 07:28:25.313430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.146 [2024-11-27 07:28:25.326076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.146 [2024-11-27 07:28:25.326664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.146 [2024-11-27 07:28:25.326687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.146 [2024-11-27 07:28:25.326696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.146 [2024-11-27 07:28:25.326918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.146 [2024-11-27 07:28:25.327138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.146 [2024-11-27 07:28:25.327148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.147 [2024-11-27 07:28:25.327155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.147 [2024-11-27 07:28:25.327169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.434 [2024-11-27 07:28:25.340011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.340593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.340612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.340621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.340842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.434 [2024-11-27 07:28:25.341062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.434 [2024-11-27 07:28:25.341071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.434 [2024-11-27 07:28:25.341078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.434 [2024-11-27 07:28:25.341086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
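(Editor's note) Within each cycle, "Failed to flush tqpair=0x218c010 (9): Bad file descriptor" follows directly from the failed connect: by the time the teardown path tries to flush the qpair, the socket behind it is already gone, so the call fails with errno 9 (EBADF) — the "(9)" in the message is that errno value. A minimal illustration (again not SPDK code) of EBADF from using an already-closed descriptor:

```c
/* Minimal sketch: I/O on a descriptor that has already been closed
 * fails with EBADF (errno 9 on Linux), the "(9)" in the flush errors. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                           /* fd is now invalid */

    if (write(fd, "x", 1) < 0)
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));

    return 0;                            /* errno = 9 (Bad file descriptor) */
}
```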
00:33:14.434 [2024-11-27 07:28:25.353929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.354564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.354607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.354619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.354866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.434 [2024-11-27 07:28:25.355092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.434 [2024-11-27 07:28:25.355100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.434 [2024-11-27 07:28:25.355108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.434 [2024-11-27 07:28:25.355116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.434 [2024-11-27 07:28:25.367761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.368446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.368487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.368498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.368740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.434 [2024-11-27 07:28:25.368964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.434 [2024-11-27 07:28:25.368974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.434 [2024-11-27 07:28:25.368981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.434 [2024-11-27 07:28:25.368990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.434 [2024-11-27 07:28:25.381856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.382439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.382460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.382469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.382690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.434 [2024-11-27 07:28:25.382910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.434 [2024-11-27 07:28:25.382918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.434 [2024-11-27 07:28:25.382925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.434 [2024-11-27 07:28:25.382932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.434 [2024-11-27 07:28:25.395760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.396393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.396435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.396446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.396688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.434 [2024-11-27 07:28:25.396913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.434 [2024-11-27 07:28:25.396927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.434 [2024-11-27 07:28:25.396935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.434 [2024-11-27 07:28:25.396943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.434 [2024-11-27 07:28:25.409591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.410255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.410299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.410312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.410556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.434 [2024-11-27 07:28:25.410781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.434 [2024-11-27 07:28:25.410790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.434 [2024-11-27 07:28:25.410798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.434 [2024-11-27 07:28:25.410806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.434 [2024-11-27 07:28:25.423452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.424048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.424069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.424077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.424304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.434 [2024-11-27 07:28:25.424525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.434 [2024-11-27 07:28:25.424534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.434 [2024-11-27 07:28:25.424541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.434 [2024-11-27 07:28:25.424548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.434 [2024-11-27 07:28:25.437401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.434 [2024-11-27 07:28:25.437985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.434 [2024-11-27 07:28:25.438004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.434 [2024-11-27 07:28:25.438012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.434 [2024-11-27 07:28:25.438238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.438460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.438469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.438477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.438483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.435 [2024-11-27 07:28:25.451325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.451877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.451896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.451905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.452126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.452355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.452364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.452371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.452378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.435 [2024-11-27 07:28:25.465208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.465743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.465761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.465771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.465992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.466219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.466229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.466237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.466245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.435 [2024-11-27 07:28:25.479074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.479636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.479655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.479663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.479884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.480105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.480114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.480121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.480128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.435 [2024-11-27 07:28:25.492978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.493625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.493681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.493693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.493941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.494176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.494187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.494195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.494203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.435 [2024-11-27 07:28:25.506893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.507478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.507504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.507513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.507737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.507959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.507969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.507977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.507984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.435 [2024-11-27 07:28:25.520857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.521554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.521613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.521626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.521879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.522106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.522116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.522124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.522133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.435 [2024-11-27 07:28:25.534828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.535302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.535335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.535344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.535581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.535804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.535813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.535821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.535830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.435 [2024-11-27 07:28:25.548713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.549314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.549378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.549392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.549650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.549878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.549887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.549896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.435 [2024-11-27 07:28:25.549905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.435 [2024-11-27 07:28:25.562599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.435 [2024-11-27 07:28:25.563276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.435 [2024-11-27 07:28:25.563337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.435 [2024-11-27 07:28:25.563350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.435 [2024-11-27 07:28:25.563607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.435 [2024-11-27 07:28:25.563835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.435 [2024-11-27 07:28:25.563847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.435 [2024-11-27 07:28:25.563855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.436 [2024-11-27 07:28:25.563865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.436 [2024-11-27 07:28:25.576565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.436 [2024-11-27 07:28:25.577192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.436 [2024-11-27 07:28:25.577222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.436 [2024-11-27 07:28:25.577231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.436 [2024-11-27 07:28:25.577455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.436 [2024-11-27 07:28:25.577678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.436 [2024-11-27 07:28:25.577697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.436 [2024-11-27 07:28:25.577705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.436 [2024-11-27 07:28:25.577713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.436 [2024-11-27 07:28:25.590399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.436 [2024-11-27 07:28:25.590969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.436 [2024-11-27 07:28:25.590993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.436 [2024-11-27 07:28:25.591002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.436 [2024-11-27 07:28:25.591233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.436 [2024-11-27 07:28:25.591456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.436 [2024-11-27 07:28:25.591465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.436 [2024-11-27 07:28:25.591473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.436 [2024-11-27 07:28:25.591482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.436 [2024-11-27 07:28:25.604347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.436 [2024-11-27 07:28:25.604928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.436 [2024-11-27 07:28:25.604950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.436 [2024-11-27 07:28:25.604959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.436 [2024-11-27 07:28:25.605191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.436 [2024-11-27 07:28:25.605413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.436 [2024-11-27 07:28:25.605422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.436 [2024-11-27 07:28:25.605429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.436 [2024-11-27 07:28:25.605437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.436 [2024-11-27 07:28:25.618309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.436 [2024-11-27 07:28:25.618872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.436 [2024-11-27 07:28:25.618894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.436 [2024-11-27 07:28:25.618903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.436 [2024-11-27 07:28:25.619126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.436 [2024-11-27 07:28:25.619358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.436 [2024-11-27 07:28:25.619368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.436 [2024-11-27 07:28:25.619376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.436 [2024-11-27 07:28:25.619384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.436 [2024-11-27 07:28:25.632288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.436 [2024-11-27 07:28:25.632861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.436 [2024-11-27 07:28:25.632923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.436 [2024-11-27 07:28:25.632935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.436 [2024-11-27 07:28:25.633202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.436 [2024-11-27 07:28:25.633431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.436 [2024-11-27 07:28:25.633442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.436 [2024-11-27 07:28:25.633451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.436 [2024-11-27 07:28:25.633460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.699 [2024-11-27 07:28:25.646146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.699 [2024-11-27 07:28:25.646781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.699 [2024-11-27 07:28:25.646808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.699 [2024-11-27 07:28:25.646817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.699 [2024-11-27 07:28:25.647040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.699 [2024-11-27 07:28:25.647272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.699 [2024-11-27 07:28:25.647283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.699 [2024-11-27 07:28:25.647291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.699 [2024-11-27 07:28:25.647298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.699 [2024-11-27 07:28:25.660153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.699 [2024-11-27 07:28:25.660724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.699 [2024-11-27 07:28:25.660750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.699 [2024-11-27 07:28:25.660758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.699 [2024-11-27 07:28:25.660980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.699 [2024-11-27 07:28:25.661209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.699 [2024-11-27 07:28:25.661222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.699 [2024-11-27 07:28:25.661230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.699 [2024-11-27 07:28:25.661238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.699 [2024-11-27 07:28:25.674102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.699 [2024-11-27 07:28:25.674672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.699 [2024-11-27 07:28:25.674704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.699 [2024-11-27 07:28:25.674713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.699 [2024-11-27 07:28:25.674935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.699 [2024-11-27 07:28:25.675164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.699 [2024-11-27 07:28:25.675173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.699 [2024-11-27 07:28:25.675181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.699 [2024-11-27 07:28:25.675188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.699 [2024-11-27 07:28:25.688054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.699 [2024-11-27 07:28:25.688682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.699 [2024-11-27 07:28:25.688708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.699 [2024-11-27 07:28:25.688716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.699 [2024-11-27 07:28:25.688939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.699 [2024-11-27 07:28:25.689168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.699 [2024-11-27 07:28:25.689178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.699 [2024-11-27 07:28:25.689186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.699 [2024-11-27 07:28:25.689193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.699 [2024-11-27 07:28:25.702032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.702726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.702787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.702801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.703058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.703304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.703314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.703324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.703335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.700 [2024-11-27 07:28:25.716176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.716854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.716917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.716930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.717209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.717439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.717449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.717457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.717466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.700 [2024-11-27 07:28:25.730147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.730865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.730927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.730939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.731210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.731439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.731448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.731456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.731465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.700 [2024-11-27 07:28:25.744103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.744753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.744816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.744829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.745085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.745328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.745338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.745347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.745356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.700 [2024-11-27 07:28:25.758010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.758695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.758758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.758770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.759027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.759267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.759285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.759293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.759302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.700 [2024-11-27 07:28:25.771959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.772696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.772759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.772772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.773028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.773271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.773282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.773291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.773300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.700 [2024-11-27 07:28:25.785975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.786603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.786631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.786641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.786864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.787086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.787096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.787103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.787111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.700 [2024-11-27 07:28:25.799967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.800540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.800565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.800573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.800795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.801016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.801026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.801034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.801041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.700 [2024-11-27 07:28:25.813904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.814525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.814549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.814557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.814780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.815002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.815012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.815019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.815026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.700 [2024-11-27 07:28:25.827884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.700 [2024-11-27 07:28:25.828577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.700 [2024-11-27 07:28:25.828638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.700 [2024-11-27 07:28:25.828651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.700 [2024-11-27 07:28:25.828907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.700 [2024-11-27 07:28:25.829135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.700 [2024-11-27 07:28:25.829144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.700 [2024-11-27 07:28:25.829153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.700 [2024-11-27 07:28:25.829177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.701 [2024-11-27 07:28:25.841852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.701 [2024-11-27 07:28:25.842470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-11-27 07:28:25.842531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.701 [2024-11-27 07:28:25.842544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.701 [2024-11-27 07:28:25.842800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.701 [2024-11-27 07:28:25.843029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.701 [2024-11-27 07:28:25.843038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.701 [2024-11-27 07:28:25.843047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.701 [2024-11-27 07:28:25.843056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.701 [2024-11-27 07:28:25.855724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.701 [2024-11-27 07:28:25.856483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-11-27 07:28:25.856552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.701 [2024-11-27 07:28:25.856566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.701 [2024-11-27 07:28:25.856823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.701 [2024-11-27 07:28:25.857050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.701 [2024-11-27 07:28:25.857059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.701 [2024-11-27 07:28:25.857068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.701 [2024-11-27 07:28:25.857077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.701 [2024-11-27 07:28:25.869755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.701 [2024-11-27 07:28:25.870511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-11-27 07:28:25.870574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.701 [2024-11-27 07:28:25.870586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.701 [2024-11-27 07:28:25.870844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.701 [2024-11-27 07:28:25.871071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.701 [2024-11-27 07:28:25.871080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.701 [2024-11-27 07:28:25.871088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.701 [2024-11-27 07:28:25.871097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.701 [2024-11-27 07:28:25.883773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.701 [2024-11-27 07:28:25.884510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-11-27 07:28:25.884573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.701 [2024-11-27 07:28:25.884586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.701 [2024-11-27 07:28:25.884842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.701 [2024-11-27 07:28:25.885070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.701 [2024-11-27 07:28:25.885079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.701 [2024-11-27 07:28:25.885087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.701 [2024-11-27 07:28:25.885096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.701 [2024-11-27 07:28:25.897780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.701 [2024-11-27 07:28:25.898374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-11-27 07:28:25.898403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.701 [2024-11-27 07:28:25.898412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.701 [2024-11-27 07:28:25.898645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.701 [2024-11-27 07:28:25.898867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.701 [2024-11-27 07:28:25.898877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.701 [2024-11-27 07:28:25.898884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.701 [2024-11-27 07:28:25.898892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.964 [2024-11-27 07:28:25.911747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.964 [2024-11-27 07:28:25.912420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.964 [2024-11-27 07:28:25.912481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.964 [2024-11-27 07:28:25.912494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.964 [2024-11-27 07:28:25.912750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.964 [2024-11-27 07:28:25.912978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.964 [2024-11-27 07:28:25.912987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.964 [2024-11-27 07:28:25.912995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.964 [2024-11-27 07:28:25.913005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.964 [2024-11-27 07:28:25.925681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.964 [2024-11-27 07:28:25.926311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.964 [2024-11-27 07:28:25.926374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.964 [2024-11-27 07:28:25.926388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.964 [2024-11-27 07:28:25.926645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.964 [2024-11-27 07:28:25.926873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.964 [2024-11-27 07:28:25.926883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.964 [2024-11-27 07:28:25.926892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.964 [2024-11-27 07:28:25.926901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.964 [2024-11-27 07:28:25.939611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:25.940272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:25.940336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:25.940350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:25.940607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:25.940835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:25.940852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:25.940861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:25.940870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.965 [2024-11-27 07:28:25.952363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:25.952983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:25.953039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:25.953048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:25.953244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:25.953403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:25.953410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:25.953416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:25.953423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.965 [2024-11-27 07:28:25.965008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:25.965596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:25.965620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:25.965627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:25.965782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:25.965934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:25.965941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:25.965947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:25.965952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.965 [2024-11-27 07:28:25.977656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:25.978228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:25.978276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:25.978286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:25.978466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:25.978623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:25.978630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:25.978636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:25.978643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.965 [2024-11-27 07:28:25.990372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:25.990939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:25.990982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:25.990990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:25.991176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:25.991333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:25.991339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:25.991345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:25.991351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.965 [2024-11-27 07:28:26.003039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:26.003508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:26.003547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:26.003555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:26.003729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:26.003885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:26.003891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:26.003897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:26.003903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.965 [2024-11-27 07:28:26.015741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:26.016284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:26.016324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:26.016333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:26.016506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:26.016662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:26.016668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:26.016674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:26.016681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.965 [2024-11-27 07:28:26.028370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:26.028870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:26.028891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:26.028897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:26.029050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:26.029215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:26.029222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:26.029227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:26.029233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.965 [2024-11-27 07:28:26.041058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:26.041581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:26.041596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:26.041602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:26.041753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:26.041905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:26.041911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:26.041916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.965 [2024-11-27 07:28:26.041921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.965 [2024-11-27 07:28:26.053738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.965 [2024-11-27 07:28:26.054281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.965 [2024-11-27 07:28:26.054314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.965 [2024-11-27 07:28:26.054323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.965 [2024-11-27 07:28:26.054495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.965 [2024-11-27 07:28:26.054650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.965 [2024-11-27 07:28:26.054656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.965 [2024-11-27 07:28:26.054661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.054668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.966 [2024-11-27 07:28:26.066499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.067071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.067104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.067112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.067292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.067447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.067453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.067459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.067464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.966 [2024-11-27 07:28:26.079144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.079678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.079693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.079699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.079851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.080003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.080009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.080014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.080018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.966 7219.75 IOPS, 28.20 MiB/s [2024-11-27T06:28:26.171Z] [2024-11-27 07:28:26.091831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.092475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.092505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.092513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.092680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.092834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.092840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.092846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.092852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.966 [2024-11-27 07:28:26.104535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.105038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.105053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.105059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.105215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.105367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.105377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.105383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.105388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
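The "7219.75 IOPS, 28.20 MiB/s" fragment interleaved above is a periodic throughput sample from the perf tool driving the test, not an error record. The two numbers are mutually consistent with a 4 KiB I/O size, which is easy to check (values copied from the log; the block size is inferred, not stated):

    /* Sketch: back out the I/O size implied by one IOPS/throughput sample. */
    #include <stdio.h>

    int main(void)
    {
        double iops  = 7219.75;     /* sample from the log */
        double mib_s = 28.20;       /* sample from the log */

        double io_bytes = mib_s * 1048576.0 / iops;
        printf("implied I/O size: %.0f bytes\n", io_bytes);  /* ~4096 = 4 KiB */
        return 0;
    }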
00:33:14.966 [2024-11-27 07:28:26.117187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.117752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.117782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.117791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.117958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.118112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.118118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.118124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.118129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.966 [2024-11-27 07:28:26.129812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.130394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.130424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.130433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.130600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.130754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.130760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.130766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.130772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:14.966 [2024-11-27 07:28:26.142464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.143035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.143065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.143073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.143247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.143402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.143408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.143414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.143423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:14.966 [2024-11-27 07:28:26.155098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:14.966 [2024-11-27 07:28:26.155652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.966 [2024-11-27 07:28:26.155682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:14.966 [2024-11-27 07:28:26.155691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:14.966 [2024-11-27 07:28:26.155858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:14.966 [2024-11-27 07:28:26.156012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:14.966 [2024-11-27 07:28:26.156018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:14.966 [2024-11-27 07:28:26.156024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:14.966 [2024-11-27 07:28:26.156030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.230 [2024-11-27 07:28:26.167860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.230 [2024-11-27 07:28:26.168472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.230 [2024-11-27 07:28:26.168502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.230 [2024-11-27 07:28:26.168511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.230 [2024-11-27 07:28:26.168678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.230 [2024-11-27 07:28:26.168832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.230 [2024-11-27 07:28:26.168838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.230 [2024-11-27 07:28:26.168844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.230 [2024-11-27 07:28:26.168849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.230 [2024-11-27 07:28:26.180536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.230 [2024-11-27 07:28:26.181104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.230 [2024-11-27 07:28:26.181134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.230 [2024-11-27 07:28:26.181143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.230 [2024-11-27 07:28:26.181320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.230 [2024-11-27 07:28:26.181475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.230 [2024-11-27 07:28:26.181481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.230 [2024-11-27 07:28:26.181487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.230 [2024-11-27 07:28:26.181493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.230 [2024-11-27 07:28:26.193181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.230 [2024-11-27 07:28:26.193755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.193789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.193797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.193964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.194118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.194124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.194130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.194135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.231 [2024-11-27 07:28:26.205816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.206456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.206486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.206495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.206663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.206818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.206825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.206830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.206836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.231 [2024-11-27 07:28:26.218522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.219064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.219095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.219104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.219280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.219435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.219441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.219447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.219452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.231 [2024-11-27 07:28:26.231275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.231749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.231778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.231787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.231958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.232112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.232118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.232123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.232129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.231 [2024-11-27 07:28:26.243969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.244528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.244559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.244567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.244734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.244889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.244895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.244900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.244906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.231 [2024-11-27 07:28:26.256730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.257284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.257314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.257323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.257493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.257647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.257654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.257659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.257664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.231 [2024-11-27 07:28:26.269495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.270070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.270100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.270108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.270284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.270438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.270448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.270454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.270460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.231 [2024-11-27 07:28:26.282144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.282688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.282718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.282727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.282894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.283048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.283054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.283059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.283065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.231 [2024-11-27 07:28:26.294905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.295447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.295477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.295486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.295653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.295808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.295814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.295820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.295825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.231 [2024-11-27 07:28:26.307637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.231 [2024-11-27 07:28:26.307979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.231 [2024-11-27 07:28:26.307995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.231 [2024-11-27 07:28:26.308001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.231 [2024-11-27 07:28:26.308153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.231 [2024-11-27 07:28:26.308310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.231 [2024-11-27 07:28:26.308317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.231 [2024-11-27 07:28:26.308322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.231 [2024-11-27 07:28:26.308330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.231 [2024-11-27 07:28:26.320291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.320778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.320791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.320796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.320947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.321098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.321105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.321110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.321114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.232 [2024-11-27 07:28:26.332954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.333499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.333529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.333537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.333704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.333858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.333864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.333870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.333875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.232 [2024-11-27 07:28:26.345707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.346257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.346287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.346296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.346463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.346617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.346623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.346628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.346634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.232 [2024-11-27 07:28:26.358460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.359028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.359061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.359070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.359244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.359398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.359404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.359410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.359415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.232 [2024-11-27 07:28:26.371090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.371660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.371691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.371699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.371866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.372020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.372026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.372032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.372037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.232 [2024-11-27 07:28:26.383724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.384278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.384308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.384316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.384486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.384648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.384655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.384661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.384666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.232 [2024-11-27 07:28:26.396494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.397063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.397093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.397101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.397283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.397438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.397444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.397449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.397455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.232 [2024-11-27 07:28:26.409124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.409698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.409728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.409737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.409904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.410058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.410064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.410069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.410075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.232 [2024-11-27 07:28:26.421765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.232 [2024-11-27 07:28:26.422278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.232 [2024-11-27 07:28:26.422308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.232 [2024-11-27 07:28:26.422317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.232 [2024-11-27 07:28:26.422487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.232 [2024-11-27 07:28:26.422642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.232 [2024-11-27 07:28:26.422648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.232 [2024-11-27 07:28:26.422653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.232 [2024-11-27 07:28:26.422659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.496 [2024-11-27 07:28:26.434489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.496 [2024-11-27 07:28:26.434985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.496 [2024-11-27 07:28:26.434999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.496 [2024-11-27 07:28:26.435005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.496 [2024-11-27 07:28:26.435157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.496 [2024-11-27 07:28:26.435320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.496 [2024-11-27 07:28:26.435334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.496 [2024-11-27 07:28:26.435339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.496 [2024-11-27 07:28:26.435344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.496 [2024-11-27 07:28:26.447166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.496 [2024-11-27 07:28:26.447663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.496 [2024-11-27 07:28:26.447676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.496 [2024-11-27 07:28:26.447681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.496 [2024-11-27 07:28:26.447833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.496 [2024-11-27 07:28:26.447984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.496 [2024-11-27 07:28:26.447990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.496 [2024-11-27 07:28:26.447995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.496 [2024-11-27 07:28:26.448000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.496 [2024-11-27 07:28:26.459825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.496 [2024-11-27 07:28:26.460371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.496 [2024-11-27 07:28:26.460401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.496 [2024-11-27 07:28:26.460410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.496 [2024-11-27 07:28:26.460580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.496 [2024-11-27 07:28:26.460734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.496 [2024-11-27 07:28:26.460740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.460745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.460751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.497 [2024-11-27 07:28:26.472587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.473130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.473166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.473176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.473346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.473500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.473506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.473512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.473521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.497 [2024-11-27 07:28:26.485361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.485933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.485963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.485972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.486138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.486300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.486307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.486312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.486318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.497 [2024-11-27 07:28:26.498001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.498574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.498605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.498613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.498780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.498934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.498940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.498946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.498952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.497 [2024-11-27 07:28:26.510643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.511104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.511133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.511142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.511316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.511471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.511477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.511483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.511488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.497 [2024-11-27 07:28:26.523308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.523877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.523911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.523919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.524086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.524249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.524256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.524261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.524267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.497 [2024-11-27 07:28:26.535946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.536523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.536553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.536561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.536728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.536882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.536888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.536893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.536899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.497 [2024-11-27 07:28:26.548591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.549210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.549241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.549250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.549419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.549573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.549579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.549584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.549590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.497 [2024-11-27 07:28:26.561288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.561860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.561890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.561898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.562069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.562230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.562238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.562243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.562249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.497 [2024-11-27 07:28:26.573930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.574397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.574412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.574418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.574570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.497 [2024-11-27 07:28:26.574721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.497 [2024-11-27 07:28:26.574726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.497 [2024-11-27 07:28:26.574731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.497 [2024-11-27 07:28:26.574736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.497 [2024-11-27 07:28:26.586704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.497 [2024-11-27 07:28:26.587273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.497 [2024-11-27 07:28:26.587303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.497 [2024-11-27 07:28:26.587312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.497 [2024-11-27 07:28:26.587479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.587633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.587638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.587644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.587649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.498 [2024-11-27 07:28:26.599335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.599909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.599939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.599948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.600115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.600277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.600287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.600293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.600298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.498 [2024-11-27 07:28:26.611972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.612554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.612584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.612592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.612759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.612913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.612920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.612925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.612930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.498 [2024-11-27 07:28:26.624609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.625200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.625230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.625238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.625408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.625562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.625568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.625574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.625579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.498 [2024-11-27 07:28:26.637253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.637826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.637856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.637865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.638032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.638193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.638200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.638206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.638212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.498 [2024-11-27 07:28:26.649914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.650497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.650527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.650536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.650703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.650857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.650863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.650869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.650874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.498 [2024-11-27 07:28:26.662562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.663138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.663174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.663183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.663351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.663506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.663512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.663518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.663523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.498 [2024-11-27 07:28:26.675223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.675803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.675833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.675842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.676008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.676169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.676176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.676182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.676187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.498 [2024-11-27 07:28:26.687892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.498 [2024-11-27 07:28:26.688384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.498 [2024-11-27 07:28:26.688418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.498 [2024-11-27 07:28:26.688426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.498 [2024-11-27 07:28:26.688593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.498 [2024-11-27 07:28:26.688748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.498 [2024-11-27 07:28:26.688754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.498 [2024-11-27 07:28:26.688759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.498 [2024-11-27 07:28:26.688765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.763 [2024-11-27 07:28:26.700609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.763 [2024-11-27 07:28:26.701106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.763 [2024-11-27 07:28:26.701120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.763 [2024-11-27 07:28:26.701126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.763 [2024-11-27 07:28:26.701284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.763 [2024-11-27 07:28:26.701437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.763 [2024-11-27 07:28:26.701442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.763 [2024-11-27 07:28:26.701448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.763 [2024-11-27 07:28:26.701452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.763 [2024-11-27 07:28:26.713316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.763 [2024-11-27 07:28:26.713773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.763 [2024-11-27 07:28:26.713786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.763 [2024-11-27 07:28:26.713792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.763 [2024-11-27 07:28:26.713944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.763 [2024-11-27 07:28:26.714095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.763 [2024-11-27 07:28:26.714102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.763 [2024-11-27 07:28:26.714109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.763 [2024-11-27 07:28:26.714116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.763 [2024-11-27 07:28:26.725955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.763 [2024-11-27 07:28:26.726446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.763 [2024-11-27 07:28:26.726459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.763 [2024-11-27 07:28:26.726465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.763 [2024-11-27 07:28:26.726620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.763 [2024-11-27 07:28:26.726771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.763 [2024-11-27 07:28:26.726777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.763 [2024-11-27 07:28:26.726781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.763 [2024-11-27 07:28:26.726786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:15.763 [2024-11-27 07:28:26.738637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:15.763 [2024-11-27 07:28:26.739124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.763 [2024-11-27 07:28:26.739137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:15.763 [2024-11-27 07:28:26.739142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:15.763 [2024-11-27 07:28:26.739298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:15.763 [2024-11-27 07:28:26.739450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:15.763 [2024-11-27 07:28:26.739456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:15.763 [2024-11-27 07:28:26.739461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:15.763 [2024-11-27 07:28:26.739466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:15.763 [2024-11-27 07:28:26.751359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.763 [2024-11-27 07:28:26.751929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.763 [2024-11-27 07:28:26.751959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.763 [2024-11-27 07:28:26.751968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.763 [2024-11-27 07:28:26.752135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.763 [2024-11-27 07:28:26.752297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.763 [2024-11-27 07:28:26.752304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.763 [2024-11-27 07:28:26.752310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.763 [2024-11-27 07:28:26.752316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.763 [2024-11-27 07:28:26.764010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.763 [2024-11-27 07:28:26.764515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.763 [2024-11-27 07:28:26.764600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.763 [2024-11-27 07:28:26.764608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.763 [2024-11-27 07:28:26.764781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.763 [2024-11-27 07:28:26.764935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.763 [2024-11-27 07:28:26.764945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.763 [2024-11-27 07:28:26.764951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.763 [2024-11-27 07:28:26.764957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.763 [2024-11-27 07:28:26.776671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.763 [2024-11-27 07:28:26.777224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.763 [2024-11-27 07:28:26.777254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.763 [2024-11-27 07:28:26.777263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.763 [2024-11-27 07:28:26.777433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.777587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.777593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.777599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.777604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.789317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.789812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.789842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.789850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.790017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.790178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.790185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.790190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.790196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.802028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.802492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.802507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.802513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.802665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.802817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.802822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.802827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.802832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.814672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.815163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.815176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.815181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.815333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.815485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.815491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.815495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.815500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.827318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.827805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.827817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.827822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.827974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.828126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.828131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.828136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.828141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.839983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.840535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.840566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.840574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.840742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.840896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.840902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.840907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.840913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.852616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.853111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.853130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.853136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.853293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.853445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.853451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.853456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.853461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.865297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.865746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.865759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.865764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.865916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.866066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.866072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.866077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.866081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.878059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.878625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.878656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.878665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.878834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.878989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.878995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.879000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.879006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.890717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.891218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.764 [2024-11-27 07:28:26.891233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.764 [2024-11-27 07:28:26.891239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.764 [2024-11-27 07:28:26.891396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.764 [2024-11-27 07:28:26.891547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.764 [2024-11-27 07:28:26.891553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.764 [2024-11-27 07:28:26.891558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.764 [2024-11-27 07:28:26.891563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.764 [2024-11-27 07:28:26.903401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.764 [2024-11-27 07:28:26.903968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.765 [2024-11-27 07:28:26.903999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.765 [2024-11-27 07:28:26.904007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.765 [2024-11-27 07:28:26.904181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.765 [2024-11-27 07:28:26.904336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.765 [2024-11-27 07:28:26.904342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.765 [2024-11-27 07:28:26.904348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.765 [2024-11-27 07:28:26.904353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.765 [2024-11-27 07:28:26.916049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.765 [2024-11-27 07:28:26.916527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.765 [2024-11-27 07:28:26.916542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.765 [2024-11-27 07:28:26.916548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.765 [2024-11-27 07:28:26.916699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.765 [2024-11-27 07:28:26.916851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.765 [2024-11-27 07:28:26.916856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.765 [2024-11-27 07:28:26.916861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.765 [2024-11-27 07:28:26.916866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.765 [2024-11-27 07:28:26.928699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.765 [2024-11-27 07:28:26.929186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.765 [2024-11-27 07:28:26.929200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.765 [2024-11-27 07:28:26.929205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.765 [2024-11-27 07:28:26.929356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.765 [2024-11-27 07:28:26.929507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.765 [2024-11-27 07:28:26.929516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.765 [2024-11-27 07:28:26.929522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.765 [2024-11-27 07:28:26.929526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.765 [2024-11-27 07:28:26.941371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.765 [2024-11-27 07:28:26.941826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.765 [2024-11-27 07:28:26.941839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.765 [2024-11-27 07:28:26.941844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.765 [2024-11-27 07:28:26.941995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.765 [2024-11-27 07:28:26.942146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.765 [2024-11-27 07:28:26.942151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.765 [2024-11-27 07:28:26.942156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.765 [2024-11-27 07:28:26.942166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:15.765 [2024-11-27 07:28:26.953991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:15.765 [2024-11-27 07:28:26.954447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.765 [2024-11-27 07:28:26.954460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:15.765 [2024-11-27 07:28:26.954465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:15.765 [2024-11-27 07:28:26.954616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:15.765 [2024-11-27 07:28:26.954767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:15.765 [2024-11-27 07:28:26.954774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:15.765 [2024-11-27 07:28:26.954779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:15.765 [2024-11-27 07:28:26.954783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.029 [2024-11-27 07:28:26.966755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.029 [2024-11-27 07:28:26.967104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.029 [2024-11-27 07:28:26.967118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.029 [2024-11-27 07:28:26.967123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.029 [2024-11-27 07:28:26.967280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.029 [2024-11-27 07:28:26.967432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.029 [2024-11-27 07:28:26.967438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.029 [2024-11-27 07:28:26.967445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.029 [2024-11-27 07:28:26.967450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.029 [2024-11-27 07:28:26.979455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.029 [2024-11-27 07:28:26.979941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.029 [2024-11-27 07:28:26.979955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.029 [2024-11-27 07:28:26.979962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.029 [2024-11-27 07:28:26.980114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:26.980271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:26.980278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:26.980282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:26.980287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:26.992123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:26.992620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:26.992632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:26.992638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:26.992789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:26.992940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:26.992947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:26.992952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:26.992957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.004787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.005388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.005419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.005428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.005595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.005749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.005755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.005761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.005766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.017449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.017946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.017964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.017970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.018122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.018279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.018285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.018290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.018295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.030123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.030572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.030585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.030591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.030742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.030893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.030899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.030904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.030908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.042751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.043245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.043259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.043264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.043415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.043567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.043572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.043577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.043582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.055417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.055905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.055917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.055922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.056080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.056236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.056243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.056248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.056252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.068079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.068547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.068559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.068565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.068716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.068867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.068873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.068878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.068883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.080720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.081239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.081252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.081258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.081409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.081561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.081567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.081572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.081576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 5775.80 IOPS, 22.56 MiB/s [2024-11-27T06:28:27.235Z] [2024-11-27 07:28:27.093398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.093964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.093994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.094003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.030 [2024-11-27 07:28:27.094177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.030 [2024-11-27 07:28:27.094332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.030 [2024-11-27 07:28:27.094342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.030 [2024-11-27 07:28:27.094348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.030 [2024-11-27 07:28:27.094353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.030 [2024-11-27 07:28:27.106043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.030 [2024-11-27 07:28:27.106545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.030 [2024-11-27 07:28:27.106562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.030 [2024-11-27 07:28:27.106568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.106720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.106871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.106877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.106882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.106887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.118722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.119191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.119204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.119210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.119361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.119512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.119518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.119523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.119527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.131362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.131819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.131832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.131837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.131988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.132140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.132146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.132150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.132163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.144001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.144385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.144398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.144404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.144555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.144706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.144712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.144716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.144721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.156696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.157167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.157180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.157185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.157336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.157487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.157494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.157498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.157503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.169362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.169847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.169859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.169864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.170016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.170173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.170179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.170184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.170189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.182022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.182486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.182502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.182508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.182659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.182810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.182815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.182820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.182825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.194659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.194992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.195008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.195014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.195174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.195327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.195333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.195337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.195342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.207320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.207805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.207818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.207823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.207975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.208125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.208131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.208136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.208140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.031 [2024-11-27 07:28:27.219975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.031 [2024-11-27 07:28:27.220426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.031 [2024-11-27 07:28:27.220439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.031 [2024-11-27 07:28:27.220444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.031 [2024-11-27 07:28:27.220598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.031 [2024-11-27 07:28:27.220750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.031 [2024-11-27 07:28:27.220756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.031 [2024-11-27 07:28:27.220762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.031 [2024-11-27 07:28:27.220766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.294 [2024-11-27 07:28:27.232603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.294 [2024-11-27 07:28:27.233057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.294 [2024-11-27 07:28:27.233070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.294 [2024-11-27 07:28:27.233075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.294 [2024-11-27 07:28:27.233231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.294 [2024-11-27 07:28:27.233383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.294 [2024-11-27 07:28:27.233389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.294 [2024-11-27 07:28:27.233394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.294 [2024-11-27 07:28:27.233398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.294 [2024-11-27 07:28:27.245244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.294 [2024-11-27 07:28:27.245731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.294 [2024-11-27 07:28:27.245744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.294 [2024-11-27 07:28:27.245749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.294 [2024-11-27 07:28:27.245900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.294 [2024-11-27 07:28:27.246051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.294 [2024-11-27 07:28:27.246056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.294 [2024-11-27 07:28:27.246061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.294 [2024-11-27 07:28:27.246066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.294 [2024-11-27 07:28:27.257897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.294 [2024-11-27 07:28:27.258342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.294 [2024-11-27 07:28:27.258355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.294 [2024-11-27 07:28:27.258361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.294 [2024-11-27 07:28:27.258511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.294 [2024-11-27 07:28:27.258662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.294 [2024-11-27 07:28:27.258671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.294 [2024-11-27 07:28:27.258676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.294 [2024-11-27 07:28:27.258680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.294 [2024-11-27 07:28:27.270657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.294 [2024-11-27 07:28:27.271106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.294 [2024-11-27 07:28:27.271118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.294 [2024-11-27 07:28:27.271123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.294 [2024-11-27 07:28:27.271278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.294 [2024-11-27 07:28:27.271430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.294 [2024-11-27 07:28:27.271436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.294 [2024-11-27 07:28:27.271441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.294 [2024-11-27 07:28:27.271445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.294 [2024-11-27 07:28:27.283289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.294 [2024-11-27 07:28:27.283746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.294 [2024-11-27 07:28:27.283758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.294 [2024-11-27 07:28:27.283763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.294 [2024-11-27 07:28:27.283914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.294 [2024-11-27 07:28:27.284066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.294 [2024-11-27 07:28:27.284072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.294 [2024-11-27 07:28:27.284077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.294 [2024-11-27 07:28:27.284081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.294 [2024-11-27 07:28:27.295922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.294 [2024-11-27 07:28:27.296401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.294 [2024-11-27 07:28:27.296414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.294 [2024-11-27 07:28:27.296419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.294 [2024-11-27 07:28:27.296570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.294 [2024-11-27 07:28:27.296721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.296726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.296731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.296739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.295 [2024-11-27 07:28:27.308561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.309046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.309058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.309063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.309219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.309370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.309376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.309381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.309386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.295 [2024-11-27 07:28:27.321220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.321679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.321691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.321696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.321848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.321999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.322005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.322009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.322014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.295 [2024-11-27 07:28:27.333845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.334331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.334343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.334349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.334500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.334651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.334656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.334661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.334666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.295 [2024-11-27 07:28:27.346509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.346994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.347010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.347015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.347171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.347322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.347328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.347333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.347337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.295 [2024-11-27 07:28:27.359172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.359705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.359736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.359744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.359912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.360066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.360072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.360077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.360083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.295 [2024-11-27 07:28:27.371923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.372371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.372387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.372393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.372545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.372696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.372702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.372706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.372711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.295 [2024-11-27 07:28:27.384685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.385143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.385164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.385170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.385326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.385477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.385483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.385488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.385493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.295 [2024-11-27 07:28:27.397334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.397901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.397931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.397940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.398107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.398270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.398277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.398283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.398289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.295 [2024-11-27 07:28:27.409986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.410459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.410475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.410481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.410632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.295 [2024-11-27 07:28:27.410783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.295 [2024-11-27 07:28:27.410790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.295 [2024-11-27 07:28:27.410795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.295 [2024-11-27 07:28:27.410800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.295 [2024-11-27 07:28:27.422615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.295 [2024-11-27 07:28:27.423101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.295 [2024-11-27 07:28:27.423113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.295 [2024-11-27 07:28:27.423118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.295 [2024-11-27 07:28:27.423274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.296 [2024-11-27 07:28:27.423426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.296 [2024-11-27 07:28:27.423435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.296 [2024-11-27 07:28:27.423440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.296 [2024-11-27 07:28:27.423445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.296 [2024-11-27 07:28:27.435258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.296 [2024-11-27 07:28:27.435660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.296 [2024-11-27 07:28:27.435690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.296 [2024-11-27 07:28:27.435699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.296 [2024-11-27 07:28:27.435866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.296 [2024-11-27 07:28:27.436020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.296 [2024-11-27 07:28:27.436026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.296 [2024-11-27 07:28:27.436031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.296 [2024-11-27 07:28:27.436037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.296 [2024-11-27 07:28:27.447878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.296 [2024-11-27 07:28:27.448462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.296 [2024-11-27 07:28:27.448492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.296 [2024-11-27 07:28:27.448501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.296 [2024-11-27 07:28:27.448668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.296 [2024-11-27 07:28:27.448822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.296 [2024-11-27 07:28:27.448828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.296 [2024-11-27 07:28:27.448833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.296 [2024-11-27 07:28:27.448839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.296 [2024-11-27 07:28:27.460519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.296 [2024-11-27 07:28:27.461124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.296 [2024-11-27 07:28:27.461154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.296 [2024-11-27 07:28:27.461169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.296 [2024-11-27 07:28:27.461336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.296 [2024-11-27 07:28:27.461490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.296 [2024-11-27 07:28:27.461497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.296 [2024-11-27 07:28:27.461503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.296 [2024-11-27 07:28:27.461512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.296 [2024-11-27 07:28:27.473187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.296 [2024-11-27 07:28:27.473776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.296 [2024-11-27 07:28:27.473805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.296 [2024-11-27 07:28:27.473814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.296 [2024-11-27 07:28:27.473982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.296 [2024-11-27 07:28:27.474137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.296 [2024-11-27 07:28:27.474143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.296 [2024-11-27 07:28:27.474150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.296 [2024-11-27 07:28:27.474156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.296 [2024-11-27 07:28:27.485860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.296 [2024-11-27 07:28:27.486470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.296 [2024-11-27 07:28:27.486501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.296 [2024-11-27 07:28:27.486509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.296 [2024-11-27 07:28:27.486676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.296 [2024-11-27 07:28:27.486831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.296 [2024-11-27 07:28:27.486838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.296 [2024-11-27 07:28:27.486844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.296 [2024-11-27 07:28:27.486850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.558 [2024-11-27 07:28:27.498567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.558 [2024-11-27 07:28:27.499068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.558 [2024-11-27 07:28:27.499083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.558 [2024-11-27 07:28:27.499089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.558 [2024-11-27 07:28:27.499244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.558 [2024-11-27 07:28:27.499397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.558 [2024-11-27 07:28:27.499402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.558 [2024-11-27 07:28:27.499407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.558 [2024-11-27 07:28:27.499412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.558 [2024-11-27 07:28:27.511238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.558 [2024-11-27 07:28:27.511712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.558 [2024-11-27 07:28:27.511730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.558 [2024-11-27 07:28:27.511735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.558 [2024-11-27 07:28:27.511887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.558 [2024-11-27 07:28:27.512038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.558 [2024-11-27 07:28:27.512044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.558 [2024-11-27 07:28:27.512049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.558 [2024-11-27 07:28:27.512053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.558 [2024-11-27 07:28:27.523887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.558 [2024-11-27 07:28:27.524378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.558 [2024-11-27 07:28:27.524392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.558 [2024-11-27 07:28:27.524397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.558 [2024-11-27 07:28:27.524549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.558 [2024-11-27 07:28:27.524700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.558 [2024-11-27 07:28:27.524706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.558 [2024-11-27 07:28:27.524711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.558 [2024-11-27 07:28:27.524716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.558 [2024-11-27 07:28:27.536534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.558 [2024-11-27 07:28:27.536979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.558 [2024-11-27 07:28:27.536992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.558 [2024-11-27 07:28:27.536997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.558 [2024-11-27 07:28:27.537149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.558 [2024-11-27 07:28:27.537313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.558 [2024-11-27 07:28:27.537319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.558 [2024-11-27 07:28:27.537324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.558 [2024-11-27 07:28:27.537329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.558 [2024-11-27 07:28:27.549149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.558 [2024-11-27 07:28:27.549691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.558 [2024-11-27 07:28:27.549721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.558 [2024-11-27 07:28:27.549730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.558 [2024-11-27 07:28:27.549900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.558 [2024-11-27 07:28:27.550054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.558 [2024-11-27 07:28:27.550060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.558 [2024-11-27 07:28:27.550066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.558 [2024-11-27 07:28:27.550071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.558 [2024-11-27 07:28:27.561896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.558 [2024-11-27 07:28:27.562387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.558 [2024-11-27 07:28:27.562417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.558 [2024-11-27 07:28:27.562426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.558 [2024-11-27 07:28:27.562594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.558 [2024-11-27 07:28:27.562749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.558 [2024-11-27 07:28:27.562755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.558 [2024-11-27 07:28:27.562761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.558 [2024-11-27 07:28:27.562766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.559 [2024-11-27 07:28:27.574616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.575153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.575189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.575198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.575367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.575521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.575527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.575532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.575538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.559 [2024-11-27 07:28:27.587381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.587949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.587979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.587988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.588155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.588325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.588335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.588341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.588347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.559 [2024-11-27 07:28:27.600044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.600505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.600520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.600526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.600678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.600829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.600836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.600842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.600848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.559 [2024-11-27 07:28:27.612688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.613024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.613038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.613044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.613201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.613353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.613358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.613363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.613368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.559 [2024-11-27 07:28:27.625333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.625818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.625830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.625835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.625987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.626138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.626144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.626149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.626163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.559 [2024-11-27 07:28:27.637997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.638462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.638475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.638481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.638632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.638783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.638788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.638793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.638797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.559 [2024-11-27 07:28:27.650760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.651204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.651217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.651222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.651373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.651525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.651530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.651535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.651539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.559 [2024-11-27 07:28:27.663516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.664099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.664129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.664138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.664312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.664467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.664473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.664479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.664485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.559 [2024-11-27 07:28:27.676170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.559 [2024-11-27 07:28:27.676653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.559 [2024-11-27 07:28:27.676689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.559 [2024-11-27 07:28:27.676698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.559 [2024-11-27 07:28:27.676864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.559 [2024-11-27 07:28:27.677018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.559 [2024-11-27 07:28:27.677024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.559 [2024-11-27 07:28:27.677030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.559 [2024-11-27 07:28:27.677035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2578085 Killed "${NVMF_APP[@]}" "$@" 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2579799 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2579799 00:33:16.559 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:16.559 [2024-11-27 07:28:27.688887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.560 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2579799 ']' 00:33:16.560 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.560 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.560 [2024-11-27 07:28:27.689504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.560 [2024-11-27 07:28:27.689534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.560 [2024-11-27 07:28:27.689543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.560 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.560 [2024-11-27 07:28:27.689710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.560 [2024-11-27 07:28:27.689865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.560 [2024-11-27 07:28:27.689871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.560 [2024-11-27 07:28:27.689876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.560 [2024-11-27 07:28:27.689882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.560 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.560 07:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.560 [2024-11-27 07:28:27.701566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.560 [2024-11-27 07:28:27.702073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.560 [2024-11-27 07:28:27.702088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.560 [2024-11-27 07:28:27.702093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.560 [2024-11-27 07:28:27.702250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.560 [2024-11-27 07:28:27.702402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.560 [2024-11-27 07:28:27.702407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.560 [2024-11-27 07:28:27.702412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.560 [2024-11-27 07:28:27.702417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.560 [2024-11-27 07:28:27.714232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.560 [2024-11-27 07:28:27.714787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.560 [2024-11-27 07:28:27.714818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.560 [2024-11-27 07:28:27.714827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.560 [2024-11-27 07:28:27.714994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.560 [2024-11-27 07:28:27.715148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.560 [2024-11-27 07:28:27.715154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.560 [2024-11-27 07:28:27.715168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.560 [2024-11-27 07:28:27.715174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
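The shell trace interleaved with the retries in the two lines above is the recovery half of the test: bdevperf.sh line 35 reports that the previous target ("${NVMF_APP[@]}", pid 2578085) was killed, then tgt_init and nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, and waitforlisten blocks until the new process (pid 2579799) exposes its RPC socket, using the rpc_addr=/var/tmp/spdk.sock and max_retries=100 values visible in the trace. wait_for_rpc below is a deliberately simplified sketch of that wait loop, built only from what the trace shows; the real waitforlisten in autotest_common.sh is more thorough:

  # wait_for_rpc: poll until the SPDK app creates its RPC UNIX domain
  # socket, giving up if the process dies or the retry budget runs out.
  wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    while (( max_retries-- > 0 )); do
      kill -0 "$pid" 2>/dev/null || return 1   # target exited -> give up
      [[ -S $rpc_addr ]] && return 0           # socket exists -> target is up
      sleep 0.1
    done
    return 1
  }
  wait_for_rpc 2579799 /var/tmp/spdk.sock 100

Until that wait completes, every host-side reconnect attempt keeps failing with the same ECONNREFUSED cycle, which is exactly what the surrounding records show.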
00:33:16.560 [2024-11-27 07:28:27.726855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.560 [2024-11-27 07:28:27.727269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.560 [2024-11-27 07:28:27.727299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.560 [2024-11-27 07:28:27.727307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.560 [2024-11-27 07:28:27.727477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.560 [2024-11-27 07:28:27.727631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.560 [2024-11-27 07:28:27.727638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.560 [2024-11-27 07:28:27.727643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.560 [2024-11-27 07:28:27.727650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.560 [2024-11-27 07:28:27.739500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.560 [2024-11-27 07:28:27.739890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.560 [2024-11-27 07:28:27.739920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.560 [2024-11-27 07:28:27.739933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.560 [2024-11-27 07:28:27.740104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.560 [2024-11-27 07:28:27.740266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.560 [2024-11-27 07:28:27.740273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.560 [2024-11-27 07:28:27.740278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.560 [2024-11-27 07:28:27.740284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.560 [2024-11-27 07:28:27.740532] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:33:16.560 [2024-11-27 07:28:27.740578] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.560 [2024-11-27 07:28:27.752255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.560 [2024-11-27 07:28:27.752849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.560 [2024-11-27 07:28:27.752878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.560 [2024-11-27 07:28:27.752887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.560 [2024-11-27 07:28:27.753058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.560 [2024-11-27 07:28:27.753217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.560 [2024-11-27 07:28:27.753224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.560 [2024-11-27 07:28:27.753230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.560 [2024-11-27 07:28:27.753236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.822 [2024-11-27 07:28:27.764919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.822 [2024-11-27 07:28:27.765429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.822 [2024-11-27 07:28:27.765445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.822 [2024-11-27 07:28:27.765451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.822 [2024-11-27 07:28:27.765603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.822 [2024-11-27 07:28:27.765755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.822 [2024-11-27 07:28:27.765760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.822 [2024-11-27 07:28:27.765766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.822 [2024-11-27 07:28:27.765771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.822 [2024-11-27 07:28:27.777639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.822 [2024-11-27 07:28:27.778238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.822 [2024-11-27 07:28:27.778268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.822 [2024-11-27 07:28:27.778281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.822 [2024-11-27 07:28:27.778451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.822 [2024-11-27 07:28:27.778606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.822 [2024-11-27 07:28:27.778612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.822 [2024-11-27 07:28:27.778618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.822 [2024-11-27 07:28:27.778624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:16.822 [2024-11-27 07:28:27.790324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:16.822 [2024-11-27 07:28:27.790926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.822 [2024-11-27 07:28:27.790956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:16.822 [2024-11-27 07:28:27.790965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:16.822 [2024-11-27 07:28:27.791133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:16.822 [2024-11-27 07:28:27.791294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:16.822 [2024-11-27 07:28:27.791301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:16.822 [2024-11-27 07:28:27.791307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:16.822 [2024-11-27 07:28:27.791313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:16.822 [2024-11-27 07:28:27.802983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.822 [2024-11-27 07:28:27.803554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.822 [2024-11-27 07:28:27.803583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.822 [2024-11-27 07:28:27.803592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.822 [2024-11-27 07:28:27.803760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.822 [2024-11-27 07:28:27.803914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.822 [2024-11-27 07:28:27.803920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.822 [2024-11-27 07:28:27.803925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.822 [2024-11-27 07:28:27.803931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.822 [2024-11-27 07:28:27.815617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.822 [2024-11-27 07:28:27.816086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.822 [2024-11-27 07:28:27.816116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.822 [2024-11-27 07:28:27.816125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.822 [2024-11-27 07:28:27.816300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.822 [2024-11-27 07:28:27.816459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.822 [2024-11-27 07:28:27.816465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.822 [2024-11-27 07:28:27.816471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.822 [2024-11-27 07:28:27.816476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.822 [2024-11-27 07:28:27.828289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.822 [2024-11-27 07:28:27.828871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.822 [2024-11-27 07:28:27.828901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.822 [2024-11-27 07:28:27.828910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.822 [2024-11-27 07:28:27.829077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.822 [2024-11-27 07:28:27.829239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.822 [2024-11-27 07:28:27.829246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.829252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.829258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.823 [2024-11-27 07:28:27.831914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:16.823 [2024-11-27 07:28:27.840953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.841476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.841507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.841517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.841688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.841843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.823 [2024-11-27 07:28:27.841849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.841855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.841861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.823 [2024-11-27 07:28:27.853692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.854200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.854222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.854229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.854387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.854539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.823 [2024-11-27 07:28:27.854550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.854555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.854560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.823 [2024-11-27 07:28:27.861200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:16.823 [2024-11-27 07:28:27.861222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:16.823 [2024-11-27 07:28:27.861229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:16.823 [2024-11-27 07:28:27.861235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:16.823 [2024-11-27 07:28:27.861240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:16.823 [2024-11-27 07:28:27.862381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:16.823 [2024-11-27 07:28:27.862593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:16.823 [2024-11-27 07:28:27.862594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:16.823 [2024-11-27 07:28:27.866393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.867037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.867067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.867076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.867251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.867407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.823 [2024-11-27 07:28:27.867413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.867418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.867424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.823 [2024-11-27 07:28:27.879107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.879704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.879735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.879744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.879912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.880066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.823 [2024-11-27 07:28:27.880073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.880078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.880084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
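A note on the reactor lines above: the placement follows directly from the "-c 0xE" core mask in the DPDK EAL parameters at the top of this excerpt. 0xE is binary 1110, so bit 0 (core 0) is clear and bits 1-3 are set, which is why spdk_app_start reports "Total cores available: 3" and reactors start on cores 1, 2 and 3. A minimal standalone C sketch of the same bit test (not SPDK source; the 8-bit scan width is an arbitrary illustration):

    #include <stdio.h>

    int main(void)
    {
        unsigned int mask = 0xE;   /* core mask passed as "-c 0xE" in the log */
        for (int core = 0; core < 8; core++) {
            if (mask & (1u << core)) {
                /* prints core 1, core 2, core 3; core 0 is skipped */
                printf("reactor would run on core %d\n", core);
            }
        }
        return 0;
    }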
00:33:16.823 [2024-11-27 07:28:27.891778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.892386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.892417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.892430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.892598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.892753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.823 [2024-11-27 07:28:27.892759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.892764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.892770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.823 [2024-11-27 07:28:27.904459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.905101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.905131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.905139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.905313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.905468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.823 [2024-11-27 07:28:27.905474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.905480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.905486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.823 [2024-11-27 07:28:27.917181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.917805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.917835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.917845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.918013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.918173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.823 [2024-11-27 07:28:27.918180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.823 [2024-11-27 07:28:27.918186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.823 [2024-11-27 07:28:27.918192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.823 [2024-11-27 07:28:27.929872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.823 [2024-11-27 07:28:27.930467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.823 [2024-11-27 07:28:27.930497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.823 [2024-11-27 07:28:27.930506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.823 [2024-11-27 07:28:27.930674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.823 [2024-11-27 07:28:27.930832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:27.930838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:27.930844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:27.930849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.824 [2024-11-27 07:28:27.942557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.824 [2024-11-27 07:28:27.943166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.824 [2024-11-27 07:28:27.943197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.824 [2024-11-27 07:28:27.943206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.824 [2024-11-27 07:28:27.943375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.824 [2024-11-27 07:28:27.943529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:27.943535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:27.943541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:27.943547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.824 [2024-11-27 07:28:27.955235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.824 [2024-11-27 07:28:27.955826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.824 [2024-11-27 07:28:27.955855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.824 [2024-11-27 07:28:27.955864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.824 [2024-11-27 07:28:27.956032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.824 [2024-11-27 07:28:27.956192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:27.956199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:27.956205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:27.956211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.824 [2024-11-27 07:28:27.967891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.824 [2024-11-27 07:28:27.968509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.824 [2024-11-27 07:28:27.968539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.824 [2024-11-27 07:28:27.968548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.824 [2024-11-27 07:28:27.968716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.824 [2024-11-27 07:28:27.968870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:27.968876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:27.968885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:27.968891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.824 [2024-11-27 07:28:27.980578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.824 [2024-11-27 07:28:27.981063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.824 [2024-11-27 07:28:27.981079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.824 [2024-11-27 07:28:27.981085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.824 [2024-11-27 07:28:27.981242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.824 [2024-11-27 07:28:27.981394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:27.981400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:27.981405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:27.981410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.824 [2024-11-27 07:28:27.993245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.824 [2024-11-27 07:28:27.993703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.824 [2024-11-27 07:28:27.993717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.824 [2024-11-27 07:28:27.993723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.824 [2024-11-27 07:28:27.993875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.824 [2024-11-27 07:28:27.994026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:27.994032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:27.994036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:27.994041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.824 [2024-11-27 07:28:28.005982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.824 [2024-11-27 07:28:28.006523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.824 [2024-11-27 07:28:28.006553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.824 [2024-11-27 07:28:28.006562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.824 [2024-11-27 07:28:28.006729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.824 [2024-11-27 07:28:28.006883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:28.006890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:28.006895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:28.006901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:16.824 [2024-11-27 07:28:28.018734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:16.824 [2024-11-27 07:28:28.019250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.824 [2024-11-27 07:28:28.019280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:16.824 [2024-11-27 07:28:28.019289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:16.824 [2024-11-27 07:28:28.019456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:16.824 [2024-11-27 07:28:28.019611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:16.824 [2024-11-27 07:28:28.019617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:16.824 [2024-11-27 07:28:28.019623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:16.824 [2024-11-27 07:28:28.019629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.086 [2024-11-27 07:28:28.031458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.086 [2024-11-27 07:28:28.031961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.086 [2024-11-27 07:28:28.031976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.086 [2024-11-27 07:28:28.031981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.086 [2024-11-27 07:28:28.032133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.086 [2024-11-27 07:28:28.032291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.086 [2024-11-27 07:28:28.032297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.086 [2024-11-27 07:28:28.032302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.086 [2024-11-27 07:28:28.032307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.086 [2024-11-27 07:28:28.044129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.086 [2024-11-27 07:28:28.044701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.086 [2024-11-27 07:28:28.044731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.086 [2024-11-27 07:28:28.044740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.086 [2024-11-27 07:28:28.044907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.086 [2024-11-27 07:28:28.045061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.086 [2024-11-27 07:28:28.045067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.086 [2024-11-27 07:28:28.045073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.086 [2024-11-27 07:28:28.045078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.086 [2024-11-27 07:28:28.056747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.086 [2024-11-27 07:28:28.057401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.086 [2024-11-27 07:28:28.057431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.086 [2024-11-27 07:28:28.057444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.086 [2024-11-27 07:28:28.057610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.086 [2024-11-27 07:28:28.057765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.086 [2024-11-27 07:28:28.057771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.086 [2024-11-27 07:28:28.057776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.086 [2024-11-27 07:28:28.057782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.086 [2024-11-27 07:28:28.069467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.086 [2024-11-27 07:28:28.069972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.086 [2024-11-27 07:28:28.070001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.086 [2024-11-27 07:28:28.070010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.086 [2024-11-27 07:28:28.070185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.086 [2024-11-27 07:28:28.070341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.086 [2024-11-27 07:28:28.070347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.086 [2024-11-27 07:28:28.070352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.070358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.082192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.082794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.082824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.082832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.083000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.083155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.083167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.083173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.083179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 4813.17 IOPS, 18.80 MiB/s [2024-11-27T06:28:28.292Z] [2024-11-27 07:28:28.095861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.096474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.096504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.096514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.096688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.096842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.096848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.096853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.096859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.108542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.109142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.109178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.109187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.109354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.109508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.109514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.109519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.109525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.121199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.121647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.121677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.121686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.121853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.122008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.122014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.122019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.122025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.133843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.134346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.134376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.134384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.134554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.134709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.134715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.134724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.134730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.146568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.147174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.147204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.147213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.147382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.147536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.147542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.147547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.147553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.159236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.159823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.159852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.159861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.160029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.160191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.160198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.160203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.160209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.171891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.172498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.172528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.172537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.172704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.172858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.172864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.172870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.087 [2024-11-27 07:28:28.172875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.087 [2024-11-27 07:28:28.184566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.087 [2024-11-27 07:28:28.185110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-11-27 07:28:28.185139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.087 [2024-11-27 07:28:28.185148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.087 [2024-11-27 07:28:28.185324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.087 [2024-11-27 07:28:28.185478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.087 [2024-11-27 07:28:28.185484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.087 [2024-11-27 07:28:28.185490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.185495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.197326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.197834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.197849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.197855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.198007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.198164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.198170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.198175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.198180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.209996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.210526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.210556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.210565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.210732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.210886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.210892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.210898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.210903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.222727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.223248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.223282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.223290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.223460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.223614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.223620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.223625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.223631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.235450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.236080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.236111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.236120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.236294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.236450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.236456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.236462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.236469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.248148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.248653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.248684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.248693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.248860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.249015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.249022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.249027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.249033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.260866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.261381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.261396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.261402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.261561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.261713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.261718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.261723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.261728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.273545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.274148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.274184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.274192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.274359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.274513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.274519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.274525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.274530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.088 [2024-11-27 07:28:28.286211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.088 [2024-11-27 07:28:28.286785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.088 [2024-11-27 07:28:28.286815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.088 [2024-11-27 07:28:28.286824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.088 [2024-11-27 07:28:28.286991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.088 [2024-11-27 07:28:28.287145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.088 [2024-11-27 07:28:28.287151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.088 [2024-11-27 07:28:28.287156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.088 [2024-11-27 07:28:28.287169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.351 [2024-11-27 07:28:28.298853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.351 [2024-11-27 07:28:28.299439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.351 [2024-11-27 07:28:28.299469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.351 [2024-11-27 07:28:28.299478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.351 [2024-11-27 07:28:28.299648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.351 [2024-11-27 07:28:28.299803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.351 [2024-11-27 07:28:28.299809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.351 [2024-11-27 07:28:28.299819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.351 [2024-11-27 07:28:28.299825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.351 [2024-11-27 07:28:28.311519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:17.351 [2024-11-27 07:28:28.312103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.351 [2024-11-27 07:28:28.312132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420
00:33:17.351 [2024-11-27 07:28:28.312141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set
00:33:17.351 [2024-11-27 07:28:28.312316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor
00:33:17.351 [2024-11-27 07:28:28.312470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:17.351 [2024-11-27 07:28:28.312477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:17.351 [2024-11-27 07:28:28.312482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:17.351 [2024-11-27 07:28:28.312488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:17.351 [2024-11-27 07:28:28.324179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.351 [2024-11-27 07:28:28.324693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.351 [2024-11-27 07:28:28.324724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.351 [2024-11-27 07:28:28.324733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.351 [2024-11-27 07:28:28.324901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.351 [2024-11-27 07:28:28.325055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.351 [2024-11-27 07:28:28.325061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.351 [2024-11-27 07:28:28.325067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.351 [2024-11-27 07:28:28.325072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.351 [2024-11-27 07:28:28.336898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.351 [2024-11-27 07:28:28.337282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.351 [2024-11-27 07:28:28.337312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.351 [2024-11-27 07:28:28.337321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.351 [2024-11-27 07:28:28.337490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.351 [2024-11-27 07:28:28.337644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.351 [2024-11-27 07:28:28.337650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.351 [2024-11-27 07:28:28.337655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.351 [2024-11-27 07:28:28.337661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.351 [2024-11-27 07:28:28.349645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.351 [2024-11-27 07:28:28.350018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.351 [2024-11-27 07:28:28.350034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.351 [2024-11-27 07:28:28.350039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.350197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.350350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.350355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.350360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.350365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.352 [2024-11-27 07:28:28.362311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.362771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.362784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.362789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.362940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.363092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.363097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.363102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.363107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.352 [2024-11-27 07:28:28.375055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.375605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.375635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.375644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.375811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.375966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.375972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.375978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.375983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.352 [2024-11-27 07:28:28.387698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.388371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.388405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.388414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.388581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.388735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.388741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.388746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.388752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.352 [2024-11-27 07:28:28.400449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.400948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.400963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.400968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.401120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.401277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.401283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.401288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.401293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.352 [2024-11-27 07:28:28.413110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.413571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.413585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.413591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.413742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.413893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.413898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.413904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.413909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.352 [2024-11-27 07:28:28.425736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.426396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.426427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.426436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.426607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.426762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.426768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.426773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.426779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.352 [2024-11-27 07:28:28.438473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.438981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.438995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.439001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.439153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.439310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.439317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.439322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.439327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.352 [2024-11-27 07:28:28.451156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.451619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.451633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.451639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.451789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.451941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.451947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.451952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.451956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.352 [2024-11-27 07:28:28.463782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.464147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.464165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.352 [2024-11-27 07:28:28.464172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.352 [2024-11-27 07:28:28.464323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.352 [2024-11-27 07:28:28.464475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.352 [2024-11-27 07:28:28.464481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.352 [2024-11-27 07:28:28.464489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.352 [2024-11-27 07:28:28.464494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.352 [2024-11-27 07:28:28.476466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.352 [2024-11-27 07:28:28.476924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.352 [2024-11-27 07:28:28.476938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.353 [2024-11-27 07:28:28.476943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.353 [2024-11-27 07:28:28.477094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.353 [2024-11-27 07:28:28.477251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.353 [2024-11-27 07:28:28.477257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.353 [2024-11-27 07:28:28.477262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.353 [2024-11-27 07:28:28.477267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.353 [2024-11-27 07:28:28.489086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.353 [2024-11-27 07:28:28.489525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.353 [2024-11-27 07:28:28.489538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.353 [2024-11-27 07:28:28.489543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.353 [2024-11-27 07:28:28.489694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.353 [2024-11-27 07:28:28.489845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.353 [2024-11-27 07:28:28.489852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.353 [2024-11-27 07:28:28.489857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.353 [2024-11-27 07:28:28.489862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.353 [2024-11-27 07:28:28.501830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.353 [2024-11-27 07:28:28.502167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.353 [2024-11-27 07:28:28.502181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.353 [2024-11-27 07:28:28.502187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.353 [2024-11-27 07:28:28.502338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.353 [2024-11-27 07:28:28.502489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.353 [2024-11-27 07:28:28.502495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.353 [2024-11-27 07:28:28.502500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.353 [2024-11-27 07:28:28.502505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.353 [2024-11-27 07:28:28.514480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.353 [2024-11-27 07:28:28.515048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.353 [2024-11-27 07:28:28.515078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.353 [2024-11-27 07:28:28.515087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.353 [2024-11-27 07:28:28.515261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.353 [2024-11-27 07:28:28.515417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.353 [2024-11-27 07:28:28.515423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.353 [2024-11-27 07:28:28.515428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.353 [2024-11-27 07:28:28.515434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.353 [2024-11-27 07:28:28.527131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.353 [2024-11-27 07:28:28.527715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.353 [2024-11-27 07:28:28.527746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.353 [2024-11-27 07:28:28.527755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.353 [2024-11-27 07:28:28.527924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.353 [2024-11-27 07:28:28.528079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.353 [2024-11-27 07:28:28.528085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.353 [2024-11-27 07:28:28.528090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.353 [2024-11-27 07:28:28.528096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.353 [2024-11-27 07:28:28.539784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.353 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.353 [2024-11-27 07:28:28.540297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.353 [2024-11-27 07:28:28.540327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.353 [2024-11-27 07:28:28.540336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.353 [2024-11-27 07:28:28.540505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.353 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:17.353 [2024-11-27 07:28:28.540660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.353 [2024-11-27 07:28:28.540667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.353 [2024-11-27 07:28:28.540672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.353 [2024-11-27 07:28:28.540678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
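Interleaved with the retry noise, the test script resumes: the (( i == 0 )) guard and the return 0 traced from autotest_common.sh lines 864/868 look like a readiness poll completing, after which the script moves on to configure the target even while the stale controller keeps failing its resets. The general shape of such a poll, for reference (a hedged sketch; wait_for_target and the rpc.py path are stand-ins, not the helper actually traced here):

wait_for_target() {
    local i
    for ((i = 20; i > 0; i--)); do
        if scripts/rpc.py rpc_get_methods >/dev/null 2>&1; then
            return 0    # target answers RPCs: ready
        fi
        sleep 0.5
    done
    return 1            # gave up waiting
}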
00:33:17.353 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:17.353 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.353 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.353 [2024-11-27 07:28:28.552532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.353 [2024-11-27 07:28:28.553001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.353 [2024-11-27 07:28:28.553016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.353 [2024-11-27 07:28:28.553022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.353 [2024-11-27 07:28:28.553181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.353 [2024-11-27 07:28:28.553333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.353 [2024-11-27 07:28:28.553339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.353 [2024-11-27 07:28:28.553345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.353 [2024-11-27 07:28:28.553351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.615 [2024-11-27 07:28:28.565165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.615 [2024-11-27 07:28:28.565703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.615 [2024-11-27 07:28:28.565732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.615 [2024-11-27 07:28:28.565741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.615 [2024-11-27 07:28:28.565910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.615 [2024-11-27 07:28:28.566064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.615 [2024-11-27 07:28:28.566070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.615 [2024-11-27 07:28:28.566076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.615 [2024-11-27 07:28:28.566082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:17.615 [2024-11-27 07:28:28.577915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.615 [2024-11-27 07:28:28.578399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.615 [2024-11-27 07:28:28.578414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.615 [2024-11-27 07:28:28.578420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.615 [2024-11-27 07:28:28.578573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.615 [2024-11-27 07:28:28.578725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.615 [2024-11-27 07:28:28.578731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.615 [2024-11-27 07:28:28.578738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.615 [2024-11-27 07:28:28.578743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.615 [2024-11-27 07:28:28.587134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.615 [2024-11-27 07:28:28.590579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.615 [2024-11-27 07:28:28.591048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.615 [2024-11-27 07:28:28.591062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.615 [2024-11-27 07:28:28.591068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.615 [2024-11-27 07:28:28.591223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.615 [2024-11-27 07:28:28.591376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.615 [2024-11-27 07:28:28.591381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.615 [2024-11-27 07:28:28.591386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.615 [2024-11-27 07:28:28.591391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
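Target configuration starts here: host/bdevperf.sh line 17 issues rpc_cmd nvmf_create_transport -t tcp -o -u 8192, acknowledged by the "*** TCP Transport Init ***" notice, with -u 8192 selecting an 8 KiB I/O unit size (the -o flag is a TCP-transport toggle passed through exactly as traced; its precise meaning depends on the rpc.py version in the tree). Issued standalone against a running target it would read (rpc.py path assumed):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192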
00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.615 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.615 [2024-11-27 07:28:28.603226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.615 [2024-11-27 07:28:28.603694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.615 [2024-11-27 07:28:28.603725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.615 [2024-11-27 07:28:28.603734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.615 [2024-11-27 07:28:28.603901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.615 [2024-11-27 07:28:28.604056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.615 [2024-11-27 07:28:28.604063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.615 [2024-11-27 07:28:28.604068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.615 [2024-11-27 07:28:28.604075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.615 [2024-11-27 07:28:28.615896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.615 [2024-11-27 07:28:28.616296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.615 [2024-11-27 07:28:28.616326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.615 [2024-11-27 07:28:28.616334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.615 [2024-11-27 07:28:28.616502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.615 [2024-11-27 07:28:28.616657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.616 [2024-11-27 07:28:28.616667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.616 [2024-11-27 07:28:28.616673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.616 [2024-11-27 07:28:28.616678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
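Next, host/bdevperf.sh line 18 creates the backing device. bdev_malloc_create takes the total size in MB and the block size in bytes as positional arguments, so this is a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0:

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0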
00:33:17.616 Malloc0 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.616 [2024-11-27 07:28:28.628661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.616 [2024-11-27 07:28:28.629074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.616 [2024-11-27 07:28:28.629089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.616 [2024-11-27 07:28:28.629095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.616 [2024-11-27 07:28:28.629252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.616 [2024-11-27 07:28:28.629404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.616 [2024-11-27 07:28:28.629410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.616 [2024-11-27 07:28:28.629415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:17.616 [2024-11-27 07:28:28.629420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.616 [2024-11-27 07:28:28.641301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.616 [2024-11-27 07:28:28.641776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.616 [2024-11-27 07:28:28.641805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218c010 with addr=10.0.0.2, port=4420 00:33:17.616 [2024-11-27 07:28:28.641814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c010 is same with the state(6) to be set 00:33:17.616 [2024-11-27 07:28:28.641982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c010 (9): Bad file descriptor 00:33:17.616 [2024-11-27 07:28:28.642136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:17.616 [2024-11-27 07:28:28.642142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:17.616 [2024-11-27 07:28:28.642148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
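The subsystem bring-up follows in two RPCs: nvmf_create_subsystem (host/bdevperf.sh line 19) registers nqn.2016-06.io.spdk:cnode1 with -a (allow any host) and serial number SPDK00000000000001, then nvmf_subsystem_add_ns (line 20) attaches Malloc0 as its namespace. As plain rpc.py calls (path assumed):

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0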
00:33:17.616 [2024-11-27 07:28:28.642154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.616 [2024-11-27 07:28:28.653930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.616 [2024-11-27 07:28:28.654001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.616 07:28:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2578588 00:33:17.616 [2024-11-27 07:28:28.682994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:33:19.143 4873.43 IOPS, 19.04 MiB/s [2024-11-27T06:28:31.292Z] 5871.25 IOPS, 22.93 MiB/s [2024-11-27T06:28:32.234Z] 6678.44 IOPS, 26.09 MiB/s [2024-11-27T06:28:33.177Z] 7314.50 IOPS, 28.57 MiB/s [2024-11-27T06:28:34.138Z] 7828.91 IOPS, 30.58 MiB/s [2024-11-27T06:28:35.524Z] 8250.92 IOPS, 32.23 MiB/s [2024-11-27T06:28:36.467Z] 8616.77 IOPS, 33.66 MiB/s [2024-11-27T06:28:37.410Z] 8938.86 IOPS, 34.92 MiB/s 00:33:26.205 Latency(us) 00:33:26.205 [2024-11-27T06:28:37.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.205 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:26.205 Verification LBA range: start 0x0 length 0x4000 00:33:26.205 Nvme1n1 : 15.01 9202.95 35.95 12744.80 0.00 5812.74 559.79 24576.00 00:33:26.205 [2024-11-27T06:28:37.411Z] =================================================================================================================== 00:33:26.206 [2024-11-27T06:28:37.411Z] Total : 9202.95 35.95 12744.80 0.00 5812.74 559.79 24576.00 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 
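Adding the listener on 10.0.0.2:4420 is what finally ends the retry storm: bdev_nvme soon logs "Resetting controller successful", bdevperf's verify workload ramps from about 4.9k to 8.9k IOPS, and the 15 s summary prints. The summary is internally consistent: with the 4096-byte I/O size from the job line, the MiB/s column follows directly from IOPS, which a one-liner confirms:

# MiB/s = IOPS * io_size_bytes / 2^20
echo '9202.95 * 4096 / 1048576' | bc -l    # ~35.95, matching the MiB/s column above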
00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.206 rmmod nvme_tcp 00:33:26.206 rmmod nvme_fabrics 00:33:26.206 rmmod nvme_keyring 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2579799 ']' 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2579799 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2579799 ']' 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2579799 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2579799 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2579799' 00:33:26.206 killing process with pid 2579799 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2579799 00:33:26.206 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2579799 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.466 07:28:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.381 07:28:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.381 00:33:28.381 real 0m28.269s 00:33:28.381 user 1m3.610s 00:33:28.381 sys 0m7.696s 00:33:28.381 07:28:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.381 07:28:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # 
set +x 00:33:28.381 ************************************ 00:33:28.381 END TEST nvmf_bdevperf 00:33:28.381 ************************************ 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.643 ************************************ 00:33:28.643 START TEST nvmf_target_disconnect 00:33:28.643 ************************************ 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:28.643 * Looking for test storage... 00:33:28.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.643 --rc genhtml_branch_coverage=1 00:33:28.643 --rc genhtml_function_coverage=1 00:33:28.643 --rc genhtml_legend=1 00:33:28.643 --rc geninfo_all_blocks=1 00:33:28.643 --rc geninfo_unexecuted_blocks=1 00:33:28.643 00:33:28.643 ' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.643 --rc genhtml_branch_coverage=1 00:33:28.643 --rc genhtml_function_coverage=1 00:33:28.643 --rc genhtml_legend=1 00:33:28.643 --rc geninfo_all_blocks=1 00:33:28.643 --rc geninfo_unexecuted_blocks=1 00:33:28.643 00:33:28.643 ' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.643 --rc genhtml_branch_coverage=1 00:33:28.643 --rc genhtml_function_coverage=1 00:33:28.643 --rc genhtml_legend=1 00:33:28.643 --rc geninfo_all_blocks=1 00:33:28.643 --rc geninfo_unexecuted_blocks=1 00:33:28.643 00:33:28.643 ' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.643 --rc genhtml_branch_coverage=1 00:33:28.643 --rc genhtml_function_coverage=1 00:33:28.643 --rc genhtml_legend=1 00:33:28.643 --rc geninfo_all_blocks=1 00:33:28.643 --rc geninfo_unexecuted_blocks=1 00:33:28.643 00:33:28.643 ' 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.643 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.906 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:28.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.907 07:28:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:37.070 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:37.070 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:37.070 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.070 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:37.071 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
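For orientation: common.sh at the top of this trace only defines the initiator-side knobs (NVMF_PORT, NVME_HOSTNQN, NVME_HOSTID, the NVME_HOST argument array, NVME_CONNECT, NVME_SUBNQN); the disconnect tests below never invoke nvme-cli themselves. A sketch of how other tests in the suite typically combine these variables, using the target address from this run; the exact invocation is illustrative, not a line from this log:

    # Hypothetical consumer of the common.sh variables above (sketch only).
    NVMF_PORT=4420
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # as done at common.sh@17
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumption: host ID is the uuid suffix
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # NVME_CONNECT='nvme connect'; attach the initiator to the target:
    nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$NVME_SUBNQN" "${NVME_HOST[@]}"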
00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:37.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:37.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:33:37.071 00:33:37.071 --- 10.0.0.2 ping statistics --- 00:33:37.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.071 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:37.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:33:37.071 00:33:37.071 --- 10.0.0.1 ping statistics --- 00:33:37.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.071 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:37.071 ************************************ 00:33:37.071 START TEST nvmf_target_disconnect_tc1 00:33:37.071 ************************************ 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.071 07:28:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:37.071 [2024-11-27 07:28:47.564980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.071 [2024-11-27 07:28:47.565088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95cae0 with addr=10.0.0.2, port=4420 00:33:37.071 [2024-11-27 07:28:47.565130] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:37.071 [2024-11-27 07:28:47.565143] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:37.071 [2024-11-27 07:28:47.565152] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:37.071 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:37.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:37.071 Initializing NVMe Controllers 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:37.071 00:33:37.071 real 0m0.146s 00:33:37.071 user 0m0.069s 00:33:37.071 sys 0m0.075s 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:37.071 ************************************ 00:33:37.071 END TEST nvmf_target_disconnect_tc1 00:33:37.071 ************************************ 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
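tc1 passes precisely because the probe fails: nothing is listening on 10.0.0.2:4420 at this point, connect() returns errno 111, the reconnect example exits nonzero, and the NOT wrapper turns that into success (es=1, so `(( !es == 0 ))` holds and the test returns 0). A stripped-down sketch of that inversion idiom, leaving out the valid_exec_arg and exit-status bookkeeping the real autotest_common.sh helper performs:

    # Minimal NOT helper: the assertion passes only when the command fails.
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0       # command failed, which is the expected outcome here
    }
    # Same shape as the tc1 invocation above (path shortened for the sketch):
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'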
00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:37.071 ************************************ 00:33:37.071 START TEST nvmf_target_disconnect_tc2 00:33:37.071 ************************************ 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:37.071 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2585844 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2585844 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2585844 ']' 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.072 07:28:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.072 [2024-11-27 07:28:47.733837] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:33:37.072 [2024-11-27 07:28:47.733894] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.072 [2024-11-27 07:28:47.832812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:37.072 [2024-11-27 07:28:47.885103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.072 [2024-11-27 07:28:47.885154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:37.072 [2024-11-27 07:28:47.885169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:37.072 [2024-11-27 07:28:47.885177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:37.072 [2024-11-27 07:28:47.885184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:37.072 [2024-11-27 07:28:47.887540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:37.072 [2024-11-27 07:28:47.887700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:37.072 [2024-11-27 07:28:47.887868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:37.072 [2024-11-27 07:28:47.887897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.644 Malloc0 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.644 [2024-11-27 07:28:48.646476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.644 07:28:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.644 [2024-11-27 07:28:48.686920] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2586144 00:33:37.644 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:37.645 07:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:39.577 07:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2585844 00:33:39.577 07:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error 
(sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Write completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Write completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Write completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Write completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Write completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Write completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Write completed with error (sct=0, sc=8) 00:33:39.577 starting I/O failed 00:33:39.577 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 [2024-11-27 07:28:50.725778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write 
completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Read completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 Write completed with error (sct=0, sc=8) 00:33:39.578 starting I/O failed 00:33:39.578 [2024-11-27 07:28:50.726191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:39.578 [2024-11-27 07:28:50.726727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.726786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.727180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.727197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.727686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.727751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.728133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.728148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.728429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.728495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 
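The listener these qpairs are chasing was provisioned only seconds earlier by the rpc_cmd calls traced above and then destroyed by `kill -9 2585844`. Reproducing that tc2 target setup by hand against a running nvmf_tgt would look roughly like the following; rpc.py and the default socket path are assumptions about the environment, while the verbs and arguments are the ones this log shows:

    # Sketch of the tc2 target provisioning, driven directly via rpc.py.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $RPC nvmf_create_transport -t tcp -o             # NVMF_TRANSPORT_OPTS='-t tcp -o'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420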
00:33:39.578 [2024-11-27 07:28:50.728852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.728865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.729370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.729434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.729810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.729824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.730208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.730245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.730658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.730670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.731009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.731020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.731359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.731372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.731626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.731638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.732001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.732014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 00:33:39.578 [2024-11-27 07:28:50.732258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.578 [2024-11-27 07:28:50.732270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.578 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats dozens more times with only the timestamps advancing, from 07:28:50.732605 through 07:28:50.755259 ...]
00:33:39.580 [2024-11-27 07:28:50.755622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.580 [2024-11-27 07:28:50.755634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.580 qpair failed and we were unable to recover it. 00:33:39.580 [2024-11-27 07:28:50.755933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.580 [2024-11-27 07:28:50.755945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.756275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.756287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.756530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.756542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.756916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.756927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.757221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.757237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.757552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.757565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.757880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.757893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.758237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.758250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.758558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.758570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 
00:33:39.581 [2024-11-27 07:28:50.758967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.758979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.759314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.759331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.759697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.759712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.760017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.760032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.760369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.760385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.760700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.760715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.761067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.761083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.761415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.761431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.761659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.761678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.761966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.761983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 
00:33:39.581 [2024-11-27 07:28:50.762304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.762321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.762680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.762695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.762999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.763014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.763329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.763345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.763692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.763708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.764038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.764054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.764376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.764392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.764618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.764633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.764919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.764934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.765263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.765279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 
00:33:39.581 [2024-11-27 07:28:50.765618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.765634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.766031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.766047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.766374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.766394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.766806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.766823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.767140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.767156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.767447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.767463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.767798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.767813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.581 qpair failed and we were unable to recover it. 00:33:39.581 [2024-11-27 07:28:50.768175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.581 [2024-11-27 07:28:50.768192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.768532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.768548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.768869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.768884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 
00:33:39.582 [2024-11-27 07:28:50.769281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.769297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.769630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.769645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.769967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.769982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.770386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.770402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.770723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.770739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.771055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.771071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.771303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.771320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.771663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.771683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.772008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.772029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.772364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.772386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 
00:33:39.582 [2024-11-27 07:28:50.772630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.772653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.772992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.773013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.773286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.773308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.773643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.773664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.773987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.774007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.774268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.774290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.774618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.774647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.775008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.775030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.775344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.775367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.775729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.775754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 
00:33:39.582 [2024-11-27 07:28:50.776070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.776093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.776410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.776433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.776777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.776798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.777121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.777142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.777506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.777528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.582 qpair failed and we were unable to recover it. 00:33:39.582 [2024-11-27 07:28:50.777855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.582 [2024-11-27 07:28:50.777875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.583 qpair failed and we were unable to recover it. 00:33:39.583 [2024-11-27 07:28:50.778198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.583 [2024-11-27 07:28:50.778241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.583 qpair failed and we were unable to recover it. 00:33:39.583 [2024-11-27 07:28:50.778453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.854 [2024-11-27 07:28:50.778477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.854 qpair failed and we were unable to recover it. 00:33:39.854 [2024-11-27 07:28:50.778847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.854 [2024-11-27 07:28:50.778871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.854 qpair failed and we were unable to recover it. 00:33:39.854 [2024-11-27 07:28:50.779199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.854 [2024-11-27 07:28:50.779222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.854 qpair failed and we were unable to recover it. 
00:33:39.854 [2024-11-27 07:28:50.779584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.854 [2024-11-27 07:28:50.779605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.854 qpair failed and we were unable to recover it. 00:33:39.854 [2024-11-27 07:28:50.779935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.854 [2024-11-27 07:28:50.779958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.780308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.780329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.780725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.780746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.781074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.781096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.781499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.781520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.781851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.781872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.782204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.782226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.782553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.782575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.782908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.782937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 
00:33:39.855 [2024-11-27 07:28:50.783287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.783318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.783650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.783678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.784033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.784063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.784413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.784443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.784772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.784801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.785174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.785204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.785570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.785598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.785850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.785882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.786236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.786268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.786647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.786675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 
00:33:39.855 [2024-11-27 07:28:50.787039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.787067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.787446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.787476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.787723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.787750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.788101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.788129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.788441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.788471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.788827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.788855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.789227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.789256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.789614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.789642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.789907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.789935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.790300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.790330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 
00:33:39.855 [2024-11-27 07:28:50.790689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.790718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.790982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.791010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.791369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.791398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.791642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.791669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.855 [2024-11-27 07:28:50.792026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.855 [2024-11-27 07:28:50.792054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.855 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.792392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.792422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.792789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.792818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.793184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.793213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.793542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.793570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.793932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.793960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 
00:33:39.856 [2024-11-27 07:28:50.794327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.794357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.794721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.794749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.795129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.795168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.795533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.795561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.795918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.795946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.796356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.796385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.796744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.796771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.797121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.797149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.797536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.797564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.797946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.797974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 
00:33:39.856 [2024-11-27 07:28:50.798323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.798353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.798724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.798752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.799126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.799155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.799521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.799550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.799885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.799913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.800204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.800234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.800625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.800653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.800912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.800946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.801305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.801335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 00:33:39.856 [2024-11-27 07:28:50.801683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.856 [2024-11-27 07:28:50.801712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.856 qpair failed and we were unable to recover it. 
00:33:39.856 [2024-11-27 07:28:50.802084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.856 [2024-11-27 07:28:50.802112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.856 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, then a sock connection error on tqpair=0x18520c0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it") repeats for every successive reconnect attempt from 07:28:50.802 through 07:28:50.882 ...]
00:33:39.862 [2024-11-27 07:28:50.882539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.862 [2024-11-27 07:28:50.882575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.862 qpair failed and we were unable to recover it.
00:33:39.862 [2024-11-27 07:28:50.882904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.882931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.883271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.883301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.883676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.883705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.884073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.884101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.884484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.884526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.884796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.884828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.885078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.885107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.885517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.885547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.885911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.885941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.886312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.886342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 
00:33:39.862 [2024-11-27 07:28:50.886699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.886729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.887085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.887113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.887475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.887507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.887944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.887973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.888343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.888372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.888731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.888760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.889106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.889135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.889500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.889529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.889894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.889923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.890285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.890315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 
00:33:39.862 [2024-11-27 07:28:50.890608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.890636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.890999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.891027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.862 qpair failed and we were unable to recover it. 00:33:39.862 [2024-11-27 07:28:50.891472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.862 [2024-11-27 07:28:50.891503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.891857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.891885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.892234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.892264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.892634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.892662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.893023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.893052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.893299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.893333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.893689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.893718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.894080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.894109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 
00:33:39.863 [2024-11-27 07:28:50.894465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.894495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.894860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.894896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.895255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.895285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.895539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.895567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.895800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.895829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.896206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.896236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.896586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.896616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.896979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.897007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.897393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.897422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.897763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.897792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 
00:33:39.863 [2024-11-27 07:28:50.898248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.898278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.898541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.898573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.898895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.898923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.899180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.899211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.899594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.899622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.899984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.900012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.900396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.900428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.900837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.900866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.901236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.901266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.901633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.901662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 
00:33:39.863 [2024-11-27 07:28:50.902031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.902059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.902332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.902361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.902708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.902736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.903116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.903145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.903522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.903551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.903920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.903949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.904308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.904339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.904710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.904738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.905102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.905131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.863 [2024-11-27 07:28:50.905526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.905555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 
00:33:39.863 [2024-11-27 07:28:50.905908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.863 [2024-11-27 07:28:50.905936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.863 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.906290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.906320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.906695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.906723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.907177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.907208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.907565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.907595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.907935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.907964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.908325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.908356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.908720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.908749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.909101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.909130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.909483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.909513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 
00:33:39.864 [2024-11-27 07:28:50.909823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.909851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.910090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.910120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.910545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.910575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.910979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.911009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.911251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.911284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.911629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.911658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.912035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.912064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.912413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.912444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.912787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.912816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.913180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.913210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 
00:33:39.864 [2024-11-27 07:28:50.913569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.913598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.913943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.913972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.914340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.914370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.914732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.914761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.915124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.915153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.915500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.915529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.915891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.915920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.916280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.916310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.916673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.916702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.917061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.917089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 
00:33:39.864 [2024-11-27 07:28:50.917454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.917483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.917923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.917952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.918205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.918239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.864 [2024-11-27 07:28:50.918601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.864 [2024-11-27 07:28:50.918629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.864 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.918979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.919008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.919389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.919419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.919780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.919808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.920175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.920205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.920539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.920569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.920937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.920972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 
00:33:39.865 [2024-11-27 07:28:50.921226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.921256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.921619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.921649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.921993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.922023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.922393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.922422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.922672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.922700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.923049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.923078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.923463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.923493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.923860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.923888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.924255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.924285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.924668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.924696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 
00:33:39.865 [2024-11-27 07:28:50.925069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.925097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.925467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.925497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.925858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.925885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.926248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.926278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.926646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.926683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.927017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.927045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.927399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.927428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.927792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.927820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.928190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.928219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.928564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.928594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 
00:33:39.865 [2024-11-27 07:28:50.928977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.929005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.929365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.929395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.929759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.929788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.930154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.930193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.930536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.930565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.930812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.930844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.931230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.931266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.931640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.931669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.932024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.932054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.932418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.932448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 
00:33:39.865 [2024-11-27 07:28:50.932808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.932837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.865 [2024-11-27 07:28:50.933209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.865 [2024-11-27 07:28:50.933240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.865 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.933617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.933646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.934080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.934109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.934536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.934567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.934805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.934837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.935207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.935238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.935637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.935667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.936019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.936049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.936416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.936446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 
00:33:39.866 [2024-11-27 07:28:50.936806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.936835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.937207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.937238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.937587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.937616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.937974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.938003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.938387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.938417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.938774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.938803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.939155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.939195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.939432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.939464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.939808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.939837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 00:33:39.866 [2024-11-27 07:28:50.940087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.866 [2024-11-27 07:28:50.940116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.866 qpair failed and we were unable to recover it. 
00:33:39.866 [2024-11-27 07:28:50.940509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.940540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.940895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.940923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.941286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.941317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.941680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.941717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.942073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.942102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.942460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.942490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.942739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.942768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.943120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.943149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.943517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.943548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.943897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.943927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.944291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.944321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.944695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.944725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.945087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.945115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.945482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.945512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.945877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.945907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.946273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.946302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.946671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.946699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.947073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.947101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.866 [2024-11-27 07:28:50.947474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.866 [2024-11-27 07:28:50.947503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.866 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.948055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.948083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.948489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.948518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.948874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.948902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.949253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.949282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.949679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.949707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.950053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.950081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.950475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.950505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.950899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.950927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.951277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.951306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.951560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.951589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.951919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.951948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.952314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.952344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.952706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.952734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.953098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.953125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.953507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.953538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.953903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.953931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.954183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.954217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.954561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.954591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.954962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.954990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.955328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.955360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.955689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.955718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.956074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.956103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.956466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.956496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.956866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.956895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.957246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.957276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.957633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.957664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.958008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.958037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.958382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.958412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.958819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.958847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.959215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.959246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.959512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.959540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.959881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.959911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.960294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.960323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.960691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.960719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.961089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.961117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.961364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.961397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.961790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.961819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.962185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.867 [2024-11-27 07:28:50.962215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.867 qpair failed and we were unable to recover it.
00:33:39.867 [2024-11-27 07:28:50.962573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.962601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.962970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.962999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.963380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.963410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.963769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.963798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.964051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.964079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.964447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.964477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.964836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.964866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.965107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.965140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.965520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.965549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.965904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.965933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.966292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.966322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.966687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.966716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.967079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.967108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.967486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.967516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.967768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.967804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.968179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.968211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.968573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.968603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.968961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.968990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.969239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.969274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.969623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.969652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.970021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.970051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.970408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.970438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.970788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.970817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.971089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.971117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.971505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.971534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.971903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.971932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.972192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.972221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.972611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.972639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.973002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.973031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.973389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.973419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.973757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.973786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.974098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.974126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.974378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.974407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.974758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.974786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.868 qpair failed and we were unable to recover it.
00:33:39.868 [2024-11-27 07:28:50.975144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.868 [2024-11-27 07:28:50.975182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.975525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.975554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.975920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.975950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.976226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.976257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.976506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.976535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.976960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.976991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.977340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.977371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.977741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.977777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.978140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.978182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.978546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.978576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.979017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.979046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.979411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.979442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.979790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.979820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.980181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.980211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.980599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.980629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.980977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.981005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.981385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.981417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.981785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.981817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.982177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.982207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.982548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.982577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.982962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.982991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.983339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.983372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.983539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.983567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.983931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.983960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.984303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.984335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.984696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.984725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.985069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.985098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.985364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.985395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.985644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.985672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.986025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.986054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.986451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.986483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.986845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.986876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.987130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.987172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.987535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.987565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.987868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.987896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.988240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.988270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.988627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.988656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.989010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.989040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.869 [2024-11-27 07:28:50.989448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.869 [2024-11-27 07:28:50.989478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.869 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.989835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.989864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.990223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.990254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.990642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.990671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.991116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.991145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.991521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.991550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.991804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.991836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.992198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.992230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.992688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.992717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.993078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.993111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.993400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.993433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.993842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.993871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.994303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.994334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.994701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.994734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.995126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.995157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.995518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.995547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.995907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.995938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.996300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.996329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.996639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.996667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.997031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.997062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.997494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.997523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.997886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.997916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.998278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.998310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.998687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.998717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.999092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.999121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.999317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.999349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:50.999712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:50.999741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.000108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.000139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.000507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.000537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.000889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.000920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.001287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.001319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.001722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.001753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.002111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.002140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.002579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.002609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.002866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.002896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.003237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.003268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.003527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.003555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.003900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.003936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.004283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.870 [2024-11-27 07:28:51.004315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.870 qpair failed and we were unable to recover it.
00:33:39.870 [2024-11-27 07:28:51.004713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.871 [2024-11-27 07:28:51.004744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.871 qpair failed and we were unable to recover it.
00:33:39.871 [2024-11-27 07:28:51.005089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.871 [2024-11-27 07:28:51.005119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:39.871 qpair failed and we were unable to recover it.
00:33:39.871 [2024-11-27 07:28:51.005461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.005492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.005863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.005892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.006170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.006201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.006571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.006602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.006969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.006999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.008836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.008904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.009341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.009373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.009820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.009851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.010230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.010267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.010630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.010658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 
00:33:39.871 [2024-11-27 07:28:51.011028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.011057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.011421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.011452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.011821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.011850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.012222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.012253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.012480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.012510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.012886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.012920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.013283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.013317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.013683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.013713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.013980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.014010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.014418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.014449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 
00:33:39.871 [2024-11-27 07:28:51.014707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.014736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.015741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.015792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.016095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.016125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.016521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.016562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.016942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.016973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.017333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.017366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.017737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.017770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.018130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.018174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.018561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.018591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.018952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.018982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 
00:33:39.871 [2024-11-27 07:28:51.019261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.019298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.019683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.019717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.020095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.020123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.020523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.020557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.020895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.020925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.021279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.021313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.871 qpair failed and we were unable to recover it. 00:33:39.871 [2024-11-27 07:28:51.021754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.871 [2024-11-27 07:28:51.021783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.022182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.022214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.022573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.022602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.022959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.022988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 
00:33:39.872 [2024-11-27 07:28:51.023356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.023390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.023650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.023679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.024040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.024069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.024437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.024469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.024825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.024856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.025106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.025136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.025426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.025457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.025703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.025732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.026093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.026121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.026480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.026513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 
00:33:39.872 [2024-11-27 07:28:51.026877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.026913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.027273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.027305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.027678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.027708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.028074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.028105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.028506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.028537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.028955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.028988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.029336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.029369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.029725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.029754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.030102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.030131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.030494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.030527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 
00:33:39.872 [2024-11-27 07:28:51.030878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.030911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.031257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.031287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.031526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.031554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.031922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.031952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.032303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.032335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.032727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.032757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.032908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.032940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.033314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.033346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.033588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.033616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.033997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.034025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 
00:33:39.872 [2024-11-27 07:28:51.034409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.034440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.034704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.034733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.872 qpair failed and we were unable to recover it. 00:33:39.872 [2024-11-27 07:28:51.035083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.872 [2024-11-27 07:28:51.035114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.035504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.035536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.035890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.035919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.036281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.036315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.036697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.036727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.037101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.037133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.037507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.037539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.037942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.037972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 
00:33:39.873 [2024-11-27 07:28:51.038252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.038283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.038658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.038688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.038935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.038968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.039337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.039368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.039750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.039782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.040131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.040173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.040531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.040561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.040923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.040952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.041328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.041358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.041788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.041819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 
00:33:39.873 [2024-11-27 07:28:51.042175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.042206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.042605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.042638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.042980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.043010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.043364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.043398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.043795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.043825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.044185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.044215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.044580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.044613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.044865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.044899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.045280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.045310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.045673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.045705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 
00:33:39.873 [2024-11-27 07:28:51.046065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.873 [2024-11-27 07:28:51.046093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:39.873 qpair failed and we were unable to recover it. 00:33:39.873 [2024-11-27 07:28:51.046455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.046487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.046844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.046876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.047246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.047275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.047638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.047666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.048018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.048046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.048404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.048436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.048683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.048718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.049113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.049146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.049540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.049568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 
00:33:40.146 [2024-11-27 07:28:51.049971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.146 [2024-11-27 07:28:51.050001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.146 qpair failed and we were unable to recover it. 00:33:40.146 [2024-11-27 07:28:51.050340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.050370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.050711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.050741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.051096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.051127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.051589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.051619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.051970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.052000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.052366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.052398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.052754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.052786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.053150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.053219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.053569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.053600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 
00:33:40.147 [2024-11-27 07:28:51.053838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.053872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.054227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.054260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.054647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.054677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.055040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.055072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.055468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.055499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.055852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.055883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.056241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.056272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.056664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.056696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.056960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.056991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.057357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.057390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 
00:33:40.147 [2024-11-27 07:28:51.057775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.057806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.058195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.058227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.058589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.058621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.058977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.059008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.059400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.059433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.059784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.059815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.060182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.060215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.060465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.060496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.060874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.060904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.061239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.061270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 
00:33:40.147 [2024-11-27 07:28:51.061654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.061684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.062045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.062075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.062443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.062474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.062905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.062935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.063221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.063252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.063668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.063704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.064067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.064099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.064470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.064501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.064840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.064871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.147 qpair failed and we were unable to recover it. 00:33:40.147 [2024-11-27 07:28:51.065225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.147 [2024-11-27 07:28:51.065258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.148 qpair failed and we were unable to recover it. 
00:33:40.148 [2024-11-27 07:28:51.065622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.148 [2024-11-27 07:28:51.065653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.148 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times, continuously from 07:28:51.065888 through 07:28:51.145504 ...]
00:33:40.158 [2024-11-27 07:28:51.145504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.158 [2024-11-27 07:28:51.145534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.158 qpair failed and we were unable to recover it.
00:33:40.158 [2024-11-27 07:28:51.145909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.158 [2024-11-27 07:28:51.145937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.158 qpair failed and we were unable to recover it. 00:33:40.158 [2024-11-27 07:28:51.146283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.158 [2024-11-27 07:28:51.146313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.158 qpair failed and we were unable to recover it. 00:33:40.158 [2024-11-27 07:28:51.146672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.158 [2024-11-27 07:28:51.146706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.158 qpair failed and we were unable to recover it. 00:33:40.158 [2024-11-27 07:28:51.147061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.158 [2024-11-27 07:28:51.147089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.158 qpair failed and we were unable to recover it. 00:33:40.158 [2024-11-27 07:28:51.147454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.158 [2024-11-27 07:28:51.147484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.158 qpair failed and we were unable to recover it. 00:33:40.158 [2024-11-27 07:28:51.147847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.158 [2024-11-27 07:28:51.147877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.148244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.148273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.148640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.148668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.149032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.149060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.149429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.149458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 
00:33:40.159 [2024-11-27 07:28:51.149823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.149852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.150221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.150250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.150647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.150675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.151010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.151039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.151421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.151452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.151797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.151827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.152178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.152208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.152462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.152491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.152834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.152861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.153120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.153148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 
00:33:40.159 [2024-11-27 07:28:51.153550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.153579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.153934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.153962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.154392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.159 [2024-11-27 07:28:51.154423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.159 qpair failed and we were unable to recover it. 00:33:40.159 [2024-11-27 07:28:51.154766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.154796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.155178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.155209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.155622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.155651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.155992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.156020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.156378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.156408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.156754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.156784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.157151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.157190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 
00:33:40.160 [2024-11-27 07:28:51.157552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.157582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.157949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.157978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.158345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.158375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.158733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.158762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.159131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.159170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.159525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.159554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.159991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.160021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.160389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.160419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.160786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.160815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.161044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.161071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 
00:33:40.160 [2024-11-27 07:28:51.161428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.160 [2024-11-27 07:28:51.161458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.160 qpair failed and we were unable to recover it. 00:33:40.160 [2024-11-27 07:28:51.161804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.161834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.162209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.162238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.162659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.162689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.163033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.163063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.163404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.163435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.163772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.163802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.164175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.164206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.164541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.164571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.164940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.164969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 
00:33:40.161 [2024-11-27 07:28:51.165254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.165285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.165677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.165705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.166067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.166096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.166459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.166489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.166920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.166949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.167360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.167389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.167730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.167760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.168134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.168174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.168524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.168554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 00:33:40.161 [2024-11-27 07:28:51.168793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.161 [2024-11-27 07:28:51.168824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.161 qpair failed and we were unable to recover it. 
00:33:40.162 [2024-11-27 07:28:51.169183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.169214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.169562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.169592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.169956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.169985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.170230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.170263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.170601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.170630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.171003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.171032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.171384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.171766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.171794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.172041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.172069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.172435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.172465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 
00:33:40.162 [2024-11-27 07:28:51.172824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.172859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.173315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.173345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.173718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.173746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.174120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.174148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.174509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.174540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.174914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.174943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.175309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.175339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.175470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.175501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.175848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.175877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.176252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.176282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 
00:33:40.162 [2024-11-27 07:28:51.176648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.176676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.177040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-11-27 07:28:51.177069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.162 qpair failed and we were unable to recover it. 00:33:40.162 [2024-11-27 07:28:51.177411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.177442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.177806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.177835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.178205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.178235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.178600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.178629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.178993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.179021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.179379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.179409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.179756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.179785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.180145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.180185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 
00:33:40.163 [2024-11-27 07:28:51.180530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.180560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.180936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.180965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.181320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.181351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.181709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.181737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.182089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-11-27 07:28:51.182119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.163 qpair failed and we were unable to recover it. 00:33:40.163 [2024-11-27 07:28:51.182492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.182523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.182881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.182910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.183272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.183308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.183584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.183613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.183973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.184002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 
00:33:40.164 [2024-11-27 07:28:51.184258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.184286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.184656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.184684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.185027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.185057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.185406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.185436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.185794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.185823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.186179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.186210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.186479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.186508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.186787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.186815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.187173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.187203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.187602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.187631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 
00:33:40.164 [2024-11-27 07:28:51.188002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-11-27 07:28:51.188030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.164 qpair failed and we were unable to recover it. 00:33:40.164 [2024-11-27 07:28:51.188390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.188421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.188761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.188790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.189153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.189197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.189441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.189469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.189731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.189760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.190107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.190135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.190507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.190537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.190898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.190926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.191322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.191352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 
00:33:40.165 [2024-11-27 07:28:51.191705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.191734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.192096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.192124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.192498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.192527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.192870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.192899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.193313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.193349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.193719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.193748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.193993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.194025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.194395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.194425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.194775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.165 [2024-11-27 07:28:51.194803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.165 qpair failed and we were unable to recover it. 00:33:40.165 [2024-11-27 07:28:51.195181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.166 [2024-11-27 07:28:51.195212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.166 qpair failed and we were unable to recover it. 
00:33:40.166 [2024-11-27 07:28:51.195567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.166 [2024-11-27 07:28:51.195596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.166 qpair failed and we were unable to recover it.
[... the same posix.c:1054:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420" pair repeats for every reconnect attempt from 07:28:51.195958 through 07:28:51.275372 (elapsed 00:33:40.166-00:33:40.176), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:40.176 [2024-11-27 07:28:51.275746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.275779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.276035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.276070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.276447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.276477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.276832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.276861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.277214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.277244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.277669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.277698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.278024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.278057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.278432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.278462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.278805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.278835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.279217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.279248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 
00:33:40.176 [2024-11-27 07:28:51.279596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.279624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.280046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.280076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.280418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.280450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.280801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.280830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.281204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.281237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.281611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.281641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.281925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.281956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.282398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.282430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.282765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.282795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.283148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.283195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 
00:33:40.176 [2024-11-27 07:28:51.283567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.283599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.283968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.283996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.284353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.284387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.284738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.284770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.285125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.285154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.285520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.285549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.285910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.285939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.286307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.286337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.286706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.286735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.287120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.287149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 
00:33:40.176 [2024-11-27 07:28:51.287504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.287536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.287887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.287916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.288257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.288287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.288636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.288666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.289028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.289057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.289396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.289426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.289776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.289805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.176 qpair failed and we were unable to recover it. 00:33:40.176 [2024-11-27 07:28:51.290177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.176 [2024-11-27 07:28:51.290208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.290572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.290600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.290970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.290999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 
00:33:40.177 [2024-11-27 07:28:51.291386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.291419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.291627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.291655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.292038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.292067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.292453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.292483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.292824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.292854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.293220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.293250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.293518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.293547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.293913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.293942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.294305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.294336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.294693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.294721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 
00:33:40.177 [2024-11-27 07:28:51.295035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.295064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.295418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.295449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.295728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.295757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.296108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.296138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.296520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.296550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.296800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.296828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.297183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.297233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.297638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.297667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.298015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.298044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.298428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.298459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 
00:33:40.177 [2024-11-27 07:28:51.298897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.298925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.299283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.299312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.299674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.299704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.300044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.300071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.300418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.300448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.300858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.300888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.301237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.301266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.301645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.301674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.302039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.302068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.302429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.302465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 
00:33:40.177 [2024-11-27 07:28:51.302805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.302833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.303204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.303234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.303615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.303644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.303889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.303917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.304182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.177 [2024-11-27 07:28:51.304215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.177 qpair failed and we were unable to recover it. 00:33:40.177 [2024-11-27 07:28:51.304596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.304625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.304992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.305021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.305392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.305421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.305777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.305806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.306184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.306216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 
00:33:40.178 [2024-11-27 07:28:51.306561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.306589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.306948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.306976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.307351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.307381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.307778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.307807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.308180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.308211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.308578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.308607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.308972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.309001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.309383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.309413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.309774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.309802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.310046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.310074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 
00:33:40.178 [2024-11-27 07:28:51.310482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.310513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.310867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.310895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.311233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.311263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.311659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.311689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.312046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.312077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.312427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.312457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.312852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.312887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.313235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.313264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.313669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.313698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.314069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.314098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 
00:33:40.178 [2024-11-27 07:28:51.314461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.314492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.314844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.314873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.315225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.315254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.315599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.315629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.315971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.316000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.316352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.316382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.316740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.316769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.317131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.317169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.317431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.317459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.178 qpair failed and we were unable to recover it. 00:33:40.178 [2024-11-27 07:28:51.317813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.178 [2024-11-27 07:28:51.317841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 
00:33:40.179 [2024-11-27 07:28:51.318157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.318212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.318547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.318576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.318945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.318973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.319339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.319369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.319734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.319763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.320142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.320181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.320543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.320571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.320939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.320968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.321326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.321357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.321722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.321751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 
00:33:40.179 [2024-11-27 07:28:51.322120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.322148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.322518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.322547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.322931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.322960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.323297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.323329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.323713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.323742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.324102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.324131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.324477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.324506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.324863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.324892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.325275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.325304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 00:33:40.179 [2024-11-27 07:28:51.325682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.179 [2024-11-27 07:28:51.325711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.179 qpair failed and we were unable to recover it. 
00:33:40.179 [2024-11-27 07:28:51.326081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.179 [2024-11-27 07:28:51.326109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.179 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x18520c0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 07:28:51.326 (wall clock 00:33:40.179) through 07:28:51.406 (wall clock 00:33:40.452) ...]
00:33:40.452 [2024-11-27 07:28:51.406457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.452 [2024-11-27 07:28:51.406487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.452 qpair failed and we were unable to recover it.
00:33:40.452 [2024-11-27 07:28:51.406841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.452 [2024-11-27 07:28:51.406869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.452 qpair failed and we were unable to recover it. 00:33:40.452 [2024-11-27 07:28:51.407236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.452 [2024-11-27 07:28:51.407264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.452 qpair failed and we were unable to recover it. 00:33:40.452 [2024-11-27 07:28:51.407613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.452 [2024-11-27 07:28:51.407643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.452 qpair failed and we were unable to recover it. 00:33:40.452 [2024-11-27 07:28:51.408010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.452 [2024-11-27 07:28:51.408038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.452 qpair failed and we were unable to recover it. 00:33:40.452 [2024-11-27 07:28:51.408398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.452 [2024-11-27 07:28:51.408430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.408769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.408798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.409171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.409201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.409548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.409575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.409968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.409996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.410287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.410318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 
00:33:40.453 [2024-11-27 07:28:51.410685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.410715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.411079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.411108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.411489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.411519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.411884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.411913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.412280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.412311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.412664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.412693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.413125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.413154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.413393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.413425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.413714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.413743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.414117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.414147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 
00:33:40.453 [2024-11-27 07:28:51.414505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.414535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.414899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.414928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.415299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.415330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.415697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.415726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.416086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.416114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.416480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.416510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.416871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.416899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.417333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.417363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.417702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.417733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.418099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.418127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 
00:33:40.453 [2024-11-27 07:28:51.418526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.418556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.418915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.418944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.419216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.419246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.419610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.419638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.453 [2024-11-27 07:28:51.419990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.453 [2024-11-27 07:28:51.420017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.453 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.420380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.420410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.420777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.420806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.421188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.421217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.421572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.421601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.421976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.422005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 
00:33:40.454 [2024-11-27 07:28:51.422433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.422462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.422830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.422865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.423213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.423242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.423586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.423614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.423972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.424002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.424383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.424412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.424782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.424810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.425178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.425207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.425606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.425634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.426008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.426036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 
00:33:40.454 [2024-11-27 07:28:51.426411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.426440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.426790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.426818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.427197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.427228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.427590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.427619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.427969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.427997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.428384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.428414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.428768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.428796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.429168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.429198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.429475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.429503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.429843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.429872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 
00:33:40.454 [2024-11-27 07:28:51.430247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.430277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.430637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.430666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.430908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.430940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.454 qpair failed and we were unable to recover it. 00:33:40.454 [2024-11-27 07:28:51.431308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.454 [2024-11-27 07:28:51.431339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.431696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.431725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.432091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.432120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.432493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.432523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.432884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.432913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.433286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.433322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.433703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.433732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 
00:33:40.455 [2024-11-27 07:28:51.434091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.434119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.434501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.434531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.434896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.434924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.435295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.435326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.435683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.435710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.436076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.436103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.436462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.436491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.436727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.436754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.437112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.437140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.437504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.437533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 
00:33:40.455 [2024-11-27 07:28:51.437903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.437931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.438295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.438324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.438703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.438730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.439092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.439119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.439364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.439396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.439769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.439796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.440181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.440211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.440576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.440605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.440958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.440985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.441351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.441381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 
00:33:40.455 [2024-11-27 07:28:51.441787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.441816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.442195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.442226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.442618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.455 [2024-11-27 07:28:51.442646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.455 qpair failed and we were unable to recover it. 00:33:40.455 [2024-11-27 07:28:51.443009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.443036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.443374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.443403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.443689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.443722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.444090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.444119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.444508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.444538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.444953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.444981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.445328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.445358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 
00:33:40.456 [2024-11-27 07:28:51.445724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.445754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.446131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.446170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.446505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.446534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.446896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.446925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.447175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.447205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.447605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.447633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.447986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.448021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.448388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.448419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.448784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.448812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.449196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.449226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 
00:33:40.456 [2024-11-27 07:28:51.449635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.449665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.450061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.450090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.450462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.450491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.450735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.450767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.451150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.451208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.451574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.451604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.451970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.452000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.452375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.452404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.452764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.452795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.453178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.453207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 
00:33:40.456 [2024-11-27 07:28:51.453577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.453606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.453967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.456 [2024-11-27 07:28:51.453996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.456 qpair failed and we were unable to recover it. 00:33:40.456 [2024-11-27 07:28:51.454246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.454275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 00:33:40.457 [2024-11-27 07:28:51.454650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.454678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 00:33:40.457 [2024-11-27 07:28:51.455042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.455071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 00:33:40.457 [2024-11-27 07:28:51.455457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.455488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 00:33:40.457 [2024-11-27 07:28:51.455843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.455871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 00:33:40.457 [2024-11-27 07:28:51.456233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.456263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 00:33:40.457 [2024-11-27 07:28:51.456672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.456700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 00:33:40.457 [2024-11-27 07:28:51.457071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.457 [2024-11-27 07:28:51.457098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.457 qpair failed and we were unable to recover it. 
00:33:40.457 [2024-11-27 07:28:51.457484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.457 [2024-11-27 07:28:51.457512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.457 qpair failed and we were unable to recover it.
[... roughly 200 further repetitions of the same three-line failure elided: every connect() attempt to 10.0.0.2 port 4420 on tqpair=0x18520c0 between 07:28:51.457 and 07:28:51.781 returned errno = 111 (ECONNREFUSED), and each qpair failed without recovering ...]
00:33:40.732 [2024-11-27 07:28:51.781385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.732 [2024-11-27 07:28:51.781415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.732 qpair failed and we were unable to recover it.
00:33:40.732 [2024-11-27 07:28:51.781802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.781831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.782199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.782229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.782589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.782618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.782966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.782994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.783374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.783403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.783776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.783804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.784205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.784234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.784589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.784618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.784959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.784988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.785377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.785407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 
00:33:40.732 [2024-11-27 07:28:51.785775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.785804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.786180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.786210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.786573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.786604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.786970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.787005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.787261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.787291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.787614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.787642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.787845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.787879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.788238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.788270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.788681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.788710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.789083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.789111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 
00:33:40.732 [2024-11-27 07:28:51.789360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.789389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.789769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.789799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.790171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.790201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.790581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.790610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.790958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.790987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.791350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.791379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.732 [2024-11-27 07:28:51.791724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.732 [2024-11-27 07:28:51.791753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.732 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.792130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.792170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.792525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.792554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.792922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.792951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 
00:33:40.733 [2024-11-27 07:28:51.793314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.793345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.793704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.793732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.794093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.794123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.794472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.794501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.794859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.794888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.795228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.795259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.795505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.795537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.795884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.795912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.796365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.796395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.796751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.796780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 
00:33:40.733 [2024-11-27 07:28:51.797122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.797157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.797494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.797524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.797883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.797913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.798262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.798293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.798668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.798696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.799047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.799076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.799465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.799496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.799834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.799865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.800228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.800259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.800631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.800660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 
00:33:40.733 [2024-11-27 07:28:51.801027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.801062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.801448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.801479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.801912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.801941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.802315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.802346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.802701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.802732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.803089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.803117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.803488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.803518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.803868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.803900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.733 qpair failed and we were unable to recover it. 00:33:40.733 [2024-11-27 07:28:51.804244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.733 [2024-11-27 07:28:51.804274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.804657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.804687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 
00:33:40.734 [2024-11-27 07:28:51.805040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.805070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.805442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.805472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.805828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.805857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.806216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.806245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.806615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.806646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.807006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.807034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.807392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.807421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.807782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.807824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.808191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.808221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.808598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.808626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 
00:33:40.734 [2024-11-27 07:28:51.808975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.809006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.809431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.809461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.809809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.809837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.810198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.810229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.810626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.810656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.811002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.811033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.811403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.811433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.811788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.811816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.812183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.812214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.812587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.812616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 
00:33:40.734 [2024-11-27 07:28:51.812977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.813008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.813387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.813417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.813759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.813789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.814130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.814181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.814569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.814598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.814977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.815005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.815222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.815256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.815614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.815644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.816003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.816032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.816301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.816331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 
00:33:40.734 [2024-11-27 07:28:51.816661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.816689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.817050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.817079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.817509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.817539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.817921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.817952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.818403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.818434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.818787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.818816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.819185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.734 [2024-11-27 07:28:51.819216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.734 qpair failed and we were unable to recover it. 00:33:40.734 [2024-11-27 07:28:51.819538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.819566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.819939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.819971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.820223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.820257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 
00:33:40.735 [2024-11-27 07:28:51.820656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.820684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.820969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.820997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.821353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.821383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.821758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.821787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.822225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.822257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.822515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.822544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.822931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.822960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.823319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.823350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.823707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.823736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.824098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.824126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 
00:33:40.735 [2024-11-27 07:28:51.824379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.824409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.824783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.824812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.825182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.825213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.825476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.825508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.825871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.825900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.826268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.826299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.826661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.826690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.827052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.827084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.827477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.827508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.827873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.827902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 
00:33:40.735 [2024-11-27 07:28:51.828250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.828282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.828645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.828673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.829046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.829077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.829427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.829458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.829810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.829839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.830209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.830239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.830563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.830592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.830972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.831001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.831370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.831400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 00:33:40.735 [2024-11-27 07:28:51.831774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.735 [2024-11-27 07:28:51.831812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.735 qpair failed and we were unable to recover it. 
00:33:40.735 [2024-11-27 07:28:51.832099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.735 [2024-11-27 07:28:51.832128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.735 qpair failed and we were unable to recover it.
00:33:40.735 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with advancing timestamps for roughly 200 further connection attempts, 07:28:51.832558 through 07:28:51.912087 ...]
00:33:40.741 [2024-11-27 07:28:51.912470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.741 [2024-11-27 07:28:51.912501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:40.741 qpair failed and we were unable to recover it.
00:33:40.741 [2024-11-27 07:28:51.912868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.741 [2024-11-27 07:28:51.912898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.741 qpair failed and we were unable to recover it. 00:33:40.741 [2024-11-27 07:28:51.913280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.741 [2024-11-27 07:28:51.913310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.741 qpair failed and we were unable to recover it. 00:33:40.741 [2024-11-27 07:28:51.913676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.741 [2024-11-27 07:28:51.913704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.741 qpair failed and we were unable to recover it. 00:33:40.741 [2024-11-27 07:28:51.914066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.741 [2024-11-27 07:28:51.914095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.741 qpair failed and we were unable to recover it. 00:33:40.741 [2024-11-27 07:28:51.914479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.741 [2024-11-27 07:28:51.914510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.741 qpair failed and we were unable to recover it. 00:33:40.741 [2024-11-27 07:28:51.914837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.741 [2024-11-27 07:28:51.914866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.741 qpair failed and we were unable to recover it. 00:33:40.741 [2024-11-27 07:28:51.915209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.915240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.915719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.915748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.916108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.916137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.916497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.916527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 
00:33:40.742 [2024-11-27 07:28:51.916903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.916933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.917277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.917308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.917670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.917704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.918071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.918099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.918445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.918477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.918836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.918867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.919116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.919144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.919529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.919558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.919918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.919948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.920308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.920338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 
00:33:40.742 [2024-11-27 07:28:51.920696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.920725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.921177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.921208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.921565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.921595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.921954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.921982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.922344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.922375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.922761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.922790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.923168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.923200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.923560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.923593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.923965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.923995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 00:33:40.742 [2024-11-27 07:28:51.924361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.742 [2024-11-27 07:28:51.924391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:40.742 qpair failed and we were unable to recover it. 
00:33:41.014 [2024-11-27 07:28:51.924745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.924780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.925137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.925179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.925553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.925581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.925826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.925855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.926213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.926243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.926588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.926618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.926974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.927003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.927377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.927407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.927770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.927799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.928151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.928209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 
00:33:41.015 [2024-11-27 07:28:51.928492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.928520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.928887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.928916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.929174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.929204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.929568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.929596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.929957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.929988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.930337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.930368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.930758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.930787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.931144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.931185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.931531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.931561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.931935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.931963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 
00:33:41.015 [2024-11-27 07:28:51.932329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.932358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.932722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.932752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.933120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.933149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.933593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.933623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.933962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.933991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.934348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.934379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.934715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.934745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.935123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.935152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.935528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.935558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.935924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.935953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 
00:33:41.015 [2024-11-27 07:28:51.936210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.936239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.936660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.936688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.937049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.937080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.937518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.937547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.937889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.937920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.938284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.938314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.015 [2024-11-27 07:28:51.938629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.015 [2024-11-27 07:28:51.938658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.015 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.939023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.939053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.939387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.939418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.939771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.939800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 
00:33:41.016 [2024-11-27 07:28:51.940172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.940203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.940600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.940628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.940990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.941018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.941282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.941312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.941698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.941727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.942094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.942123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.942409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.942443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.942794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.942823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.943188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.943219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.943595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.943626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 
00:33:41.016 [2024-11-27 07:28:51.943980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.944012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.944393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.944423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.944757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.944786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.945151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.945190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.945533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.945563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.945921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.945952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.946316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.946346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.946716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.946745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.947096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.947125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.947503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.947533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 
00:33:41.016 [2024-11-27 07:28:51.947879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.947911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.948275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.948306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.948672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.948701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.949051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.949080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.949482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.949513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.949926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.949955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.950307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.950337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.950668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.950697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.950942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.950975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.951333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.951363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 
00:33:41.016 [2024-11-27 07:28:51.951793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.951823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.952180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.952212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.952600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.952630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.952991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.953020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.953405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.016 [2024-11-27 07:28:51.953437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.016 qpair failed and we were unable to recover it. 00:33:41.016 [2024-11-27 07:28:51.953830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.953858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.954213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.954243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.954615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.954652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.955008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.955038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.955292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.955325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 
00:33:41.017 [2024-11-27 07:28:51.955700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.955729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.955993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.956021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.956390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.956420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.956785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.956816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.957067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.957096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.957481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.957512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.957867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.957896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.958253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.958283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.958541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.958569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.958937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.958965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 
00:33:41.017 [2024-11-27 07:28:51.959307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.959336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.959691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.959721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.960129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.960157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.960550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.960578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.960995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.961024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.961394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.961425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.961788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.961818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.962192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.962222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.962580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.962607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 00:33:41.017 [2024-11-27 07:28:51.962975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.017 [2024-11-27 07:28:51.963003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.017 qpair failed and we were unable to recover it. 
00:33:41.017 [2024-11-27 07:28:51.963384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.017 [2024-11-27 07:28:51.963414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.017 qpair failed and we were unable to recover it.
00:33:41.017 [... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 07:28:51.963 and 07:28:52.052 ...]
00:33:41.021 [2024-11-27 07:28:52.052021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.021 [2024-11-27 07:28:52.052039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.021 qpair failed and we were unable to recover it.
00:33:41.021 [2024-11-27 07:28:52.052346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.052366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.052692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.052712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.053029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.053049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.053391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.053411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.053817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.053835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.054179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.054199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.054522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.054540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.054872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.054892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.055226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.055250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.055586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.055605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 
00:33:41.021 [2024-11-27 07:28:52.055930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.055949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.056281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.056305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.056636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.056655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.056875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.056896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.057259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.057279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.057645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.057671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.058020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.058045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.058295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.058324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.058676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.058701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.059053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.059078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 
00:33:41.021 [2024-11-27 07:28:52.059323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.059350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.059718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.059743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.060116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.060141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.060570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.060597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.060836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.060861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.061221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.061249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.061611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.061639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.062007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.062032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.062303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.062330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.062574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.062604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 
00:33:41.021 [2024-11-27 07:28:52.062997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.063022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.063395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.063423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.063773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.063800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.064180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.064208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.064555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.064581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.064962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.064988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.065385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.065412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.065663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.065692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.066049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.066082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 00:33:41.021 [2024-11-27 07:28:52.066487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.021 [2024-11-27 07:28:52.066514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.021 qpair failed and we were unable to recover it. 
00:33:41.021 [2024-11-27 07:28:52.066883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.066908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.067255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.067282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.067672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.067698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.068065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.068095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.068474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.068505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.068871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.068901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.069237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.069267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.069626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.069656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.069937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.069966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.070195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.070228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 
00:33:41.022 [2024-11-27 07:28:52.070620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.070651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.071017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.071046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.071396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.071429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.071789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.071818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.072201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.072231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.072639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.072668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.073050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.073079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.073440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.073469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.073830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.073859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.074223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.074253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 
00:33:41.022 [2024-11-27 07:28:52.074603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.074631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.074967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.074997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.075356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.075387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.075738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.075767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.076200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.076231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.076601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.076630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.076973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.077003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.077283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.077313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.077671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.077700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.078050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.078080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 
00:33:41.022 [2024-11-27 07:28:52.078423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.078454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.078794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.078824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.079068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.079097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.079458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.079490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.079865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.079896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.080254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.080284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.080631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.080662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.081017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.081046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.081427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.081457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.081807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.081840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 
00:33:41.022 [2024-11-27 07:28:52.082237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.082268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.082627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.082655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.083009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.083046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.083394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.083425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.083781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.083811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.084199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.084235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.084581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.084610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.084974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.085003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.085373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.085402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.085770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.085799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 
00:33:41.022 [2024-11-27 07:28:52.086170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.086200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.086560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.086592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.086936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.086966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.087210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.087244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.087612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.087643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.088003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.088031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.088388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.088418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.088778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.088808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.089179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.089209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.089569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.089597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 
00:33:41.022 [2024-11-27 07:28:52.089951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.089980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.090326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.090356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.090705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.090735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.091100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.091130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.091343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.091374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.091609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.091641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.091989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.092025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.092388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.092419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.092764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.092795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 00:33:41.022 [2024-11-27 07:28:52.093157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.022 [2024-11-27 07:28:52.093208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.022 qpair failed and we were unable to recover it. 
00:33:41.022 [2024-11-27 07:28:52.093609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.093639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.093908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.093941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.094195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.094225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.094584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.094614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.095019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.095048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.095293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.095324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.095722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.095752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.096141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.096185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.096566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.096595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.096959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.096987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 
00:33:41.023 [2024-11-27 07:28:52.097328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.097359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.097734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.097765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.098132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.098171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.098522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.098551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.098878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.098907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.099280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.099309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.099660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.099688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.100051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.100081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.100451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.100483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 00:33:41.023 [2024-11-27 07:28:52.100850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.023 [2024-11-27 07:28:52.100879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.023 qpair failed and we were unable to recover it. 
00:33:41.023 [2024-11-27 07:28:52.101241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.101271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.101543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.101572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.101951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.101982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.102244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.102279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.102633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.102662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.103021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.103050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.103407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.103438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.103800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.103828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.104193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.104223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.104585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.104616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.104972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.105002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.105358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.105388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.105777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.105805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.106087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.106115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.106526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.106557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.106931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.106959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.107330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.107359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.107717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.107747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.108108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.108136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.108529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.108559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.108924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.108954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.109333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.109365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.109604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.109632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.109977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.110005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.110338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.110368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.110702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.110731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.111071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.111101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.111481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.111514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.111875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.111904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.112274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.112305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.112702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.112731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.113088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.113116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.113382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.113417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.113772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.113803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.114174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.114206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.114571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.114600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.114962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.114991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.115373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.115404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.115795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.115825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.116180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.116211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.116581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.116609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.116961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.116990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.117301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.117333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.117701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.117730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.118092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.118122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.118486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.118516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.118876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.118904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.119280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.119310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.119678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.119707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.023 qpair failed and we were unable to recover it.
00:33:41.023 [2024-11-27 07:28:52.120076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.023 [2024-11-27 07:28:52.120105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.120502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.120534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.120888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.120916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.121278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.121309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.121673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.121702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.122045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.122074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.122411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.122441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.122792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.122823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.123202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.123232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.123613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.123656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.123990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.124019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.124388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.124422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.124680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.124712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.125084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.125113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.125477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.125510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.125858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.125888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.126256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.126286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.126627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.126657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.127018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.127046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.127398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.127428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.127783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.127812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.128181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.128211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.128577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.128613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.128977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.129006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.129367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.129397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.129800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.129830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.130185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.130215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.130601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.130630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.130990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.131021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.131403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.131435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.131831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.131860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.132222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.132252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.132618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.132649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.133009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.133038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.133405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.133436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.133797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.133826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.134191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.134221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.134581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.134610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.134975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.135003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.135379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.135409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.135757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.135786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.136142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.136205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.136578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.136608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.136962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.136991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.137327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.137357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.137630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.137659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.138033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.138062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.138410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.138440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.138685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.138717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.139054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.139090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.139537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.139568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.139914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.139943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.140305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.140335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.140697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.140726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.141089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.141118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.141480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.141511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.141887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.141916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.142276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.142307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.142669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.142698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.143054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.143084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.143436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.143467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.024 [2024-11-27 07:28:52.143826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.024 [2024-11-27 07:28:52.143856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.024 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.144225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.144257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.144614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.144643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.145009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.145038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.145382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.145413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.145780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.145811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.146182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.146213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.146568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.146596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.146959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.146989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.147346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.147377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.147707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.147737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.148098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.148126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.148485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.148517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.148888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.148918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.149169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.149202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.149607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.149642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.150016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.150046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.150419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.150450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.150813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.150842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.151212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.151245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.151495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.151527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.151877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.151907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.152270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.152300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.152674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.152702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.153049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.153080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.153443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.153474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.153726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.153754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.154104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.154132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.154493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.154523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.154859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.154889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.155130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.155174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.155567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.155596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.155976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.156005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.156277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.156306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.156667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.156696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.157109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.157138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.157501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.157532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.157898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.157927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.158287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.158317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.158695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.158724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.159077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.159105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.159531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.159563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.159927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.159956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.160305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.160336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.160690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.160719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.161072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.161100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.161475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.161505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.161864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.161895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.162253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.162283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.162651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.162680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.163046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.163075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.163424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.163453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.163810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.163838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.164211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.164242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.164635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.164663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.165071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.165100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.165466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.165497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.165840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.165869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.166107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.166140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.166525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.166554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.166913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.166942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.167307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.167337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.167735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.167763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.168131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.168174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.168523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.168555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.168920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.168949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.169326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.169357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.169724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.169753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.170131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.170182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.170587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.025 [2024-11-27 07:28:52.170618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.025 qpair failed and we were unable to recover it.
00:33:41.025 [2024-11-27 07:28:52.171001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.171031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.171402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.171434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.171794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.171822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.172062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.172094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.172454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.172484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.172846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.172875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.173223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.173255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.173624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.173653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.174008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.174036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.174415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.174445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.174813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.174842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.175207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.175238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.175603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.175631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.176033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.176068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.176403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.176434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.176798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.176826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.177226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.177256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.177622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.177654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.178005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.178034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.178385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.178415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.178774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.178802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.179149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.179192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.179557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.179586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.179953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.179981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.180360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.180390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.180697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.180726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.181092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.181121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.181523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.181553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.181893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.181923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.182293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.026 [2024-11-27 07:28:52.182324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.026 qpair failed and we were unable to recover it.
00:33:41.026 [2024-11-27 07:28:52.182706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.182735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.183100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.183128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.183378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.183412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.183809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.183838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.184208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.184241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.184628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.184657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.185015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.185043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.185391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.185421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.185773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.185801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.186222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.186253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 
00:33:41.026 [2024-11-27 07:28:52.186619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.186655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.187035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.187065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.187419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.187449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.187862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.187891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.188249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.188279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.188631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.188659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.189093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.189123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.189507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.189538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.189788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.189815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.190178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.190209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 
00:33:41.026 [2024-11-27 07:28:52.190574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.190603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.191007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.191037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.191405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.191437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.191801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.191829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.192205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.192236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.192530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.192558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.192936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.192965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.193374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.193405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.193776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.193806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.194174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.194204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 
00:33:41.026 [2024-11-27 07:28:52.194595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.194624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.194986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.195017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.195395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.195425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.195825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.026 [2024-11-27 07:28:52.195857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.026 qpair failed and we were unable to recover it. 00:33:41.026 [2024-11-27 07:28:52.196205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.196235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.196520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.196549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.196895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.196923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.197287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.197317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.197683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.197714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.198077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.198107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 
00:33:41.027 [2024-11-27 07:28:52.198477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.198508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.198877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.198908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.199272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.199302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.199685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.199713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.200063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.200095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.200456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.200486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.200853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.200882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.201237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.201266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.201594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.201623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.201988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.202018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 
00:33:41.027 [2024-11-27 07:28:52.202389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.202419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.202780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.202809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.203184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.203214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.203466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.203494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.203834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.203864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.204121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.204150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.204550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.204579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.204925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.204956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.205312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.205344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.205688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.205717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 
00:33:41.027 [2024-11-27 07:28:52.206074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.206103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.206462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.206491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.027 [2024-11-27 07:28:52.206858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.027 [2024-11-27 07:28:52.206888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.027 qpair failed and we were unable to recover it. 00:33:41.301 [2024-11-27 07:28:52.207236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.301 [2024-11-27 07:28:52.207270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.301 qpair failed and we were unable to recover it. 00:33:41.301 [2024-11-27 07:28:52.207627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.301 [2024-11-27 07:28:52.207656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.301 qpair failed and we were unable to recover it. 00:33:41.301 [2024-11-27 07:28:52.208017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.301 [2024-11-27 07:28:52.208046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.301 qpair failed and we were unable to recover it. 00:33:41.301 [2024-11-27 07:28:52.208390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.301 [2024-11-27 07:28:52.208421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.301 qpair failed and we were unable to recover it. 00:33:41.301 [2024-11-27 07:28:52.208792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.301 [2024-11-27 07:28:52.208822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.301 qpair failed and we were unable to recover it. 00:33:41.301 [2024-11-27 07:28:52.209157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.301 [2024-11-27 07:28:52.209206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.301 qpair failed and we were unable to recover it. 00:33:41.301 [2024-11-27 07:28:52.211121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.211203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 
00:33:41.302 [2024-11-27 07:28:52.211630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.211665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.212045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.212075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.212412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.212444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.212801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.212831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.213186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.213218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.213649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.213677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.214034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.214063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.214420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.214451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.214810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.214849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.215202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.215232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 
00:33:41.302 [2024-11-27 07:28:52.215628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.215658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.216022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.216051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.216421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.216455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.216823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.216853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.217280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.217310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.217646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.217676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.218035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.218064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.218439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.218470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.218841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.218871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.219231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.219261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 
00:33:41.302 [2024-11-27 07:28:52.219625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.219656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.220022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.220051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.220432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.220463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.220868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.220899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.221258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.221287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.221636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.221665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.222029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.222058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.222426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.222456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.222826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.222856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.223219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.223250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 
00:33:41.302 [2024-11-27 07:28:52.223651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.223680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.224040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.224068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.224434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.224466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.224830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.224859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.225222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.225311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.225706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.225745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.302 [2024-11-27 07:28:52.225986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.302 [2024-11-27 07:28:52.226015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.302 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.226389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.226423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.226720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.226751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.227091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.227120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 
00:33:41.303 [2024-11-27 07:28:52.227490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.227522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.227879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.227910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.228332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.228365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.228705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.228736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.229132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.229175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.229513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.229544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.229904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.229934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.230302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.230335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.230750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.230780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.231136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.231189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 
00:33:41.303 [2024-11-27 07:28:52.231552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.231582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.231847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.231880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.232226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.232259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.232675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.232708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.233043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.233074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.234961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.235025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.235434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.235468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.235846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.235876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.236238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.236269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.236706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.236735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 
00:33:41.303 [2024-11-27 07:28:52.237090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.237119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.237560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.237592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.237942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.237981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.238352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.238384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.238760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.238789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.239147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.239203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.239575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.239606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.239961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.239990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.240345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.240377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-11-27 07:28:52.240742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-11-27 07:28:52.240772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 
[... the same three-message sequence (connect() refused with errno = 111, sock connection error of tqpair=0x18520c0 at 10.0.0.2:4420, qpair unrecoverable) repeats for every reconnect attempt from 07:28:52.241 through 07:28:52.321 ...]
00:33:41.309 [2024-11-27 07:28:52.321957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.309 [2024-11-27 07:28:52.321985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.309 qpair failed and we were unable to recover it.
00:33:41.309 [2024-11-27 07:28:52.322351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.322381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.322747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.322776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.323123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.323154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.323521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.323551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.323821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.323849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.324101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.324130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.324529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.324559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.324906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.324935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.325299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.325329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.325705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.325741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 
00:33:41.309 [2024-11-27 07:28:52.326100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.326128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.326518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.326548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.326916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.326944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.327212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.327241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.327614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.327642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.328019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.328048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.328412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.328442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.328696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.328728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.329102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.329133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.329507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.329537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 
00:33:41.309 [2024-11-27 07:28:52.329897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.329929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.330295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.330326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.330704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.330733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.331074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.331105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.331522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.331553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.331895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.331927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.332274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.332304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.332673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.332703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.333049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.309 [2024-11-27 07:28:52.333079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.309 qpair failed and we were unable to recover it. 00:33:41.309 [2024-11-27 07:28:52.333439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.333469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 
00:33:41.310 [2024-11-27 07:28:52.333705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.333734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.334086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.334115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.334465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.334497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.334857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.334885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.335260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.335291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.335570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.335599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.335950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.335979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.336340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.336372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.336733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.336763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.337118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.337147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 
00:33:41.310 [2024-11-27 07:28:52.337610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.337641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.337972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.338002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.338272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.338302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.338688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.338717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.339072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.339103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.339341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.339374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.339721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.339749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.340109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.340138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.340391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.340424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.340802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.340831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 
00:33:41.310 [2024-11-27 07:28:52.341192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.341223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.341575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.341603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.341965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.341995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.342359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.342389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.342749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.342778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.343089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.343118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.343462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.343492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.343856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.343886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.344288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.344319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.344673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.344704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 
00:33:41.310 [2024-11-27 07:28:52.345053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.345083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.345432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.345462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.345819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.345849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.346206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.346236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.346612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.310 [2024-11-27 07:28:52.346642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.310 qpair failed and we were unable to recover it. 00:33:41.310 [2024-11-27 07:28:52.346994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.347023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.347388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.347418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.347768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.347797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.348156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.348207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.348604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.348632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 
00:33:41.311 [2024-11-27 07:28:52.348977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.349006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.349385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.349416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.349775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.349803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.350177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.350208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.350597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.350627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.350989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.351018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.351398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.351427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.351824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.351859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.352207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.352237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.352602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.352633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 
00:33:41.311 [2024-11-27 07:28:52.352964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.352994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.353348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.353378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.353645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.353673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.354029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.354058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.354407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.354438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.354796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.354826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.355182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.355214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.355462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.355493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.355842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.355871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.356236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.356268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 
00:33:41.311 [2024-11-27 07:28:52.356637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.356666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.357023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.357053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.357397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.357428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.357788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.357819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.358156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.358200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.358563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.358592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.358950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.358979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.359230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.359263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.359662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.359691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.360046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.360075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 
00:33:41.311 [2024-11-27 07:28:52.360415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-11-27 07:28:52.360445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-11-27 07:28:52.360813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.360842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.361102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.361132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.361563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.361593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.361948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.361984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.362341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.362371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.362727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.362756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.363120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.363148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.363504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.363533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.363892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.363922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-11-27 07:28:52.364189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.364220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.364508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.364538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.364892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.364922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.365288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.365318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.365685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.365714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.366062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.366092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.366438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.366469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.366825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.366853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.367213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.367244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.367631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.367660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-11-27 07:28:52.368008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.368038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.368289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.368319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.368594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.368623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.368971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.369000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.369369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.369399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.369765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.369793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.370139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.370182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.370436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.370466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.370828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.370858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-11-27 07:28:52.371088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.371120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-11-27 07:28:52.371548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-11-27 07:28:52.371578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it.
00:33:41.312 [... the same two-line failure (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420) and the closing message "qpair failed and we were unable to recover it." repeat for every subsequent connection attempt, timestamps 07:28:52.371914 through 07:28:52.452943 ...]
00:33:41.318 [2024-11-27 07:28:52.453310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.453340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.453704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.453732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.454098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.454127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.454386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.454420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.454789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.454817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.455203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.455234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.455591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.455619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.455983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.456015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.456350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.456381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.456749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.456780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 
00:33:41.318 [2024-11-27 07:28:52.457139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.457182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.457482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.457510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.457865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.457894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.458241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.458272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.458647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.458678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.459041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.459070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.459435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.459466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.459814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.459845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.460279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.460308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.460657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.460688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 
00:33:41.318 [2024-11-27 07:28:52.461040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.461070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.461406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.461436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.461849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.461879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.462234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.462265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.462614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.318 [2024-11-27 07:28:52.462644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.318 qpair failed and we were unable to recover it. 00:33:41.318 [2024-11-27 07:28:52.462886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.462919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.463315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.463346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.463742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.463771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.464133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.464174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.464521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.464550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 
00:33:41.319 [2024-11-27 07:28:52.464919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.464949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.465320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.465350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.465721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.465750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.466088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.466118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.466457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.466486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.466846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.466876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.467237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.467269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.467605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.467635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.467998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.468027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.468402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.468432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 
00:33:41.319 [2024-11-27 07:28:52.468786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.468815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.469168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.469199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.469565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.469595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.469960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.469989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.470253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.470283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.470640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.470670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.471021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.471051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.471420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.471451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.471783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.471813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.472186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.472223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 
00:33:41.319 [2024-11-27 07:28:52.472580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.472608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.472975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.473005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.473374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.473404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.473752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.473783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.474148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.474193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.474560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.474590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.474957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.474987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.475348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.475379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.475740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.475771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.476127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.476170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 
00:33:41.319 [2024-11-27 07:28:52.476569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.476597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.476955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.476986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.477282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.477316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.319 qpair failed and we were unable to recover it. 00:33:41.319 [2024-11-27 07:28:52.477705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.319 [2024-11-27 07:28:52.477735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.478113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.478143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.478519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.478552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.478926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.478957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.479224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.479256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.479626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.479655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.480022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.480052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-11-27 07:28:52.480508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.480541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.480902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.480933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.481298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.481329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.481689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.481718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.482078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.482108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.482477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.482507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.482869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.482905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.483276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.483309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.483679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.483709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.484064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.484094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-11-27 07:28:52.484505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.484536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.484789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.484819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.485198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.485229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.485593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.485624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.485994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.486024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.486386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.486417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.486774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.486804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.487182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.487213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.487482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.487512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.487911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.487940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-11-27 07:28:52.488276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.488310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.488659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.488689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.489052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.489082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.489442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.489473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.489820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.489850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.490207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.490239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.490578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-11-27 07:28:52.490607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-11-27 07:28:52.490962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-11-27 07:28:52.490991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-11-27 07:28:52.491354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-11-27 07:28:52.491384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-11-27 07:28:52.491739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-11-27 07:28:52.491767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 
00:33:41.321 [2024-11-27 07:28:52.492030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-11-27 07:28:52.492060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-11-27 07:28:52.492453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.492486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.492845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.492877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.493248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.493285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.493650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.493680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.494048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.494078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.494413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.494446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.494793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.494823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.495182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.495215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.495562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.495591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 
00:33:41.594 [2024-11-27 07:28:52.495955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.495984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.496250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.496282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.496649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.496680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.497045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.497073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.497442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.497474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.497818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.497846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.498212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.498244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.498625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.498656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.499003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.499033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.499386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.499418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 
00:33:41.594 [2024-11-27 07:28:52.499786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.499815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.500182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.500213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.500624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.500653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.501022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.501050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.501409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.501442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.501799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.501830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.502195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-11-27 07:28:52.502225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-11-27 07:28:52.502582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-11-27 07:28:52.502612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-11-27 07:28:52.502978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-11-27 07:28:52.503007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-11-27 07:28:52.503263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-11-27 07:28:52.503292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 
00:33:41.595 [2024-11-27 07:28:52.503659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.595 [2024-11-27 07:28:52.503689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.595 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, then nvme_tcp_qpair_connect_sock on tqpair=0x18520c0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back through 2024-11-27 07:28:52.584219 (log time 00:33:41.595-00:33:41.602) ...]
00:33:41.602 [2024-11-27 07:28:52.584564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.584594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.584961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.584990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.585365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.585396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.585750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.585778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.586144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.586182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.586534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.586562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.586861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.586890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.587257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.587286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.587520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.587548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.587791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.587823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 
00:33:41.602 [2024-11-27 07:28:52.588200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.588231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.588602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.588632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.588997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.589025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.589391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.589420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.589773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.589802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.590167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.590197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-11-27 07:28:52.590527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-11-27 07:28:52.590556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.590916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.590945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.591305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.591334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.591690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.591719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 
00:33:41.603 [2024-11-27 07:28:52.592094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.592123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.592505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.592541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.592900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.592929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.593302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.593332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.593684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.593713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.594073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.594102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.594476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.594506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.594879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.594908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.595281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.595312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.595679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.595707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 
00:33:41.603 [2024-11-27 07:28:52.596069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.596098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.596475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.596506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.596868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.596898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.597247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.597277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.597644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.597673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.598038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.598067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.598414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.598445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.598806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.598835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.599199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.599229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.599518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.599547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 
00:33:41.603 [2024-11-27 07:28:52.599929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.599958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.600319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.600350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.600595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.600624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.600977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.601006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.601370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.601400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.601755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.601784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-11-27 07:28:52.602144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-11-27 07:28:52.602185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.602531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.602560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.602855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.602889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.603223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.603255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-11-27 07:28:52.603500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.603532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.603884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.603913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.604311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.604341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.604688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.604717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.605078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.605107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.605543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.605574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.605927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.605956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.606314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.606344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.606722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.606751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.607115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.607144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-11-27 07:28:52.607516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.607545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.607912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.607941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.608316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.608347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.608710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.608746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.609099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.609127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.609380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.609412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.609665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.609694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.609965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.609994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.610342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.610373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.610753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.610782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-11-27 07:28:52.611134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.611173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.611523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.611552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.611913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.611942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.612293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.612323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.612735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.612764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.613121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.613156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.613395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.613425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.613802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.613831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.614189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.614220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.614570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.614598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-11-27 07:28:52.614945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.614975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.615397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.615427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.615783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.615812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.616177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.616208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.616565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.616594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.616958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.616986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.617431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.617461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.617795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.617825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-11-27 07:28:52.618187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-11-27 07:28:52.618218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.618578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.618607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-11-27 07:28:52.618966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.618995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.619272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.619303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.619672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.619700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.620053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.620083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.620419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.620450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.620877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.620906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.621168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.621200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.621573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.621603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.621959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.621987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.622367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.622399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-11-27 07:28:52.622754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.622783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.623153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.623191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.623564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.623592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.623968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.623997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.624393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.624424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.624772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.624802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.625178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.625208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.625565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.625594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.625956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.625985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.626347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.626376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-11-27 07:28:52.626750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.626779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.627154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.627193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.627554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.627583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.627950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.627978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.628346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.628376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.628737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.628765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.629126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.629156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.629544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.629573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.629931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.629960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.630312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.630343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-11-27 07:28:52.630520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.630548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.630975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.631005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.631463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.631493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.631837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.631865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.632231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.632261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.632610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.632639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.632997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.633025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.633402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.633433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.633808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.633837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-11-27 07:28:52.634204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.634234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-11-27 07:28:52.634574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-11-27 07:28:52.634603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, with only the timestamps advancing from 07:28:52.634 through 07:28:52.715 ...]
00:33:41.609 [2024-11-27 07:28:52.715868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.715903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.716255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.716287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2585844 Killed "${NVMF_APP[@]}" "$@" 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.716664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.716693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.717052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.717081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 [2024-11-27 07:28:52.717431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.717461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 [2024-11-27 07:28:52.717831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.717867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:41.609 [2024-11-27 07:28:52.718218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.718248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 
00:33:41.609 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.609 [2024-11-27 07:28:52.718580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.718610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.718973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.719003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.719398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.719428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.719768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.719799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.720154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.720197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.720542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.720570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.720952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.720981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.721333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.721363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.721728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.721761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.722128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.722170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 
00:33:41.609 [2024-11-27 07:28:52.722520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.722550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.722887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.722917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.723287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.723320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.723697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.609 [2024-11-27 07:28:52.723727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.609 qpair failed and we were unable to recover it. 00:33:41.609 [2024-11-27 07:28:52.724068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.724097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.724379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.724410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.724749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.724778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.725105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.725135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.725529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.725561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.725914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.725945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 
00:33:41.610 [2024-11-27 07:28:52.726282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.726313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.726677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.726708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2586878 00:33:41.610 [2024-11-27 07:28:52.727065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.727097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2586878 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.727351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.727392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2586878 ']' 00:33:41.610 [2024-11-27 07:28:52.727742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.727774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:41.610 [2024-11-27 07:28:52.728126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.728171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:41.610 [2024-11-27 07:28:52.728526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.728559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 07:28:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.610 [2024-11-27 07:28:52.728898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.728930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.729301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.729338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.729699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.729729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.730083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.730113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.730520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.730550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.730908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.730936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.731287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.731319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.731561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.731591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 
00:33:41.610 [2024-11-27 07:28:52.731949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.731978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.732357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.732390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.732738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.732771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.733135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.733181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.733549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.733579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.733872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.733900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.734138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.734187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.734553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.734585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.734922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.734952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.735313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.735345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 
00:33:41.610 [2024-11-27 07:28:52.735709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.735738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.736088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.736130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.736533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.736565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.736929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.736958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.737187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.737218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.737553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.737587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.737960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.737990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.738380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.738412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.738758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.738789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.739035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.739065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 
00:33:41.610 [2024-11-27 07:28:52.739511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.739543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.739827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.739858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.740226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.740257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.740531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.740565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.740822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.740855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.741244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.741277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.741675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.741705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.742043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.742073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.742423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.742454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.742855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.742885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 
00:33:41.610 [2024-11-27 07:28:52.743244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.743277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.743660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.743688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.743926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.743958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.744350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.744380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.744732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.610 [2024-11-27 07:28:52.744761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.610 qpair failed and we were unable to recover it. 00:33:41.610 [2024-11-27 07:28:52.744906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.744943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.745236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.745267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.745615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.745647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.746004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.746034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.746432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.746462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 
00:33:41.611 [2024-11-27 07:28:52.746810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.746839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.747239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.747272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.747630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.747659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.748019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.748050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.748443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.748477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.748809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.748839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.749107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.749137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.749590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.749623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.750029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.750060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.750410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.750442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 
00:33:41.611 [2024-11-27 07:28:52.750690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.750720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.750971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.751000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.751426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.751457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.751835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.751866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.752204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.752233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.752628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.752656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.753039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.753068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.753340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.753371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.753642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.753671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.753955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.753985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 
00:33:41.611 [2024-11-27 07:28:52.754243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.754275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.754508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.754540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.754788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.754823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.755217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.755249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.755630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.755661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.756041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.756072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.756435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.756467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.756831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.756861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.757239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.757271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.757643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.757673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 
00:33:41.611 [2024-11-27 07:28:52.758070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.758099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.758544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.758575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.758945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.758978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.759317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.759349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.759575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.759605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.759854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.759885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.760139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.760182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.760465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.760497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.760861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.760890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 00:33:41.611 [2024-11-27 07:28:52.761243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.611 [2024-11-27 07:28:52.761284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.611 qpair failed and we were unable to recover it. 
00:33:41.611 [2024-11-27 07:28:52.761666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.611 [2024-11-27 07:28:52.761697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.611 qpair failed and we were unable to recover it.
[... the identical three-line failure (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every attempt from 07:28:52.762083 through 07:28:52.780106; only the timestamps differ ...]
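On Linux, errno = 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is being answered with a RST because no listener is up yet on the target side. The standalone C sketch below is not SPDK source and only borrows the address and port from this log; it reproduces the exact errno that posix_sock_create reports:

/* Minimal standalone sketch (not SPDK code): reproduce errno = 111.
 * 111 is ECONNREFUSED -- the peer answered the SYN with RST, meaning
 * nothing was listening on 10.0.0.2:4420 at that instant. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        /* With no NVMe/TCP target listening this prints:
         *   connect: errno=111 (Connection refused) */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Once the target brings up its listener on port 4420, the same connect() succeeds and the retry storm in this log stops.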
00:33:41.612 [2024-11-27 07:28:52.780474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.612 [2024-11-27 07:28:52.780505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.612 qpair failed and we were unable to recover it.
[... the same triplet repeats from 07:28:52.780774 through 07:28:52.782770 ...]
00:33:41.612 [2024-11-27 07:28:52.782842] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:33:41.612 [2024-11-27 07:28:52.782914] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the triplet repeats at 07:28:52.783135 and 07:28:52.783449 ...]
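The bracketed DPDK EAL parameters above decode as: -c 0xF0 runs the nvmf app on cores 4-7; --no-telemetry disables DPDK's telemetry socket; --base-virtaddr pins hugepage mappings at a fixed virtual base; --match-allocations frees hugepage memory back in the same chunks it was allocated in; --file-prefix=spdk0 namespaces the hugepage files so other DPDK processes on the host don't collide; --proc-type=auto autodetects primary vs. secondary process. SPDK assembles this argv internally from its own options; the sketch below (log-level flags trimmed) only shows how the same flags would reach DPDK's rte_eal_init() directly, and is not SPDK's actual startup code:

/* Sketch, assuming DPDK headers are installed: pass the EAL flags from
 * the log straight to rte_eal_init(). */
#include <rte_eal.h>

int main(void)
{
    char *argv[] = {
        "nvmf",                           /* program name, as logged      */
        "-c", "0xF0",                     /* core mask: cores 4-7         */
        "--no-telemetry",                 /* no telemetry socket          */
        "--base-virtaddr=0x200000000000", /* fixed VA base for hugepages  */
        "--match-allocations",            /* free memory as allocated     */
        "--file-prefix=spdk0",            /* namespace hugepage files     */
        "--proc-type=auto",               /* primary/secondary autodetect */
    };
    int argc = sizeof(argv) / sizeof(argv[0]);

    /* rte_eal_init() returns the number of consumed args, or -1. */
    return rte_eal_init(argc, argv) < 0 ? 1 : 0;
}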
00:33:41.612 [2024-11-27 07:28:52.783749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.612 [2024-11-27 07:28:52.783778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.612 qpair failed and we were unable to recover it.
[... the same triplet repeats, unchanged except for timestamps, from 07:28:52.784122 through 07:28:52.839827 (the wall-clock prefix advances from 00:33:41.612 to 00:33:41.889 along the way); every attempt targets tqpair=0x18520c0 at 10.0.0.2:4420 and fails with errno = 111 ...]
00:33:41.890 [2024-11-27 07:28:52.840206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.840236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.840568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.840597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.840968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.840997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.841380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.841410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.841764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.841794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.842171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.842201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.842568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.842597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.842951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.842981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.843316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.843349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.843694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.843723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 
00:33:41.890 [2024-11-27 07:28:52.844011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.844041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.844417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.844448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.844796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.844825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.845196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.845226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.845568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.845598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.845971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.846000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.846309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.846339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.846696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.846725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.847086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.847114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.847510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.847541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 
00:33:41.890 [2024-11-27 07:28:52.847956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.847985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.848365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.848395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.848827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.848856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.849221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.849250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.849597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.849627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.849990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.850018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.890 [2024-11-27 07:28:52.850386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.890 [2024-11-27 07:28:52.850418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.890 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.850779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.850809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.851067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.851100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.851515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.851546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 
00:33:41.891 [2024-11-27 07:28:52.851889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.851919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.852275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.852305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.852650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.852680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.853067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.853098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.853327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.853357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.853683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.853713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.854055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.854084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.854412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.854443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.854812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.854841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.855203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.855233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 
00:33:41.891 [2024-11-27 07:28:52.855603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.855634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.856003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.856032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.856398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.856428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.856803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.856832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.857089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.857122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.857411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.857441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.857823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.857852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.858239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.858270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.858528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.858556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.858917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.858946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 
00:33:41.891 [2024-11-27 07:28:52.859314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.859345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.859696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.859724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.859978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.860007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.860386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.860417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.860776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.860805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.861172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.861203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.861558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.861587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.861912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.861941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.862284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.862315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.862717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.862746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 
00:33:41.891 [2024-11-27 07:28:52.863117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.863153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.863495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.891 [2024-11-27 07:28:52.863525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.891 qpair failed and we were unable to recover it. 00:33:41.891 [2024-11-27 07:28:52.863881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.863910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.864125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.864155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.864514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.864545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.864888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.864917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.865276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.865307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.865669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.865698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.866000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.866027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.866418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.866448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 
00:33:41.892 [2024-11-27 07:28:52.866807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.866836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.867193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.867222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.867557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.867585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.867798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.867828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.868253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.868282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.868637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.868666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.869023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.869052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.869416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.869446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.869799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.869828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 00:33:41.892 [2024-11-27 07:28:52.870208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.892 [2024-11-27 07:28:52.870237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.892 qpair failed and we were unable to recover it. 
00:33:41.892 [... the qpair reconnect failures continue unchanged from 07:28:52.870565 through 07:28:52.872950 ...]
00:33:41.892 [2024-11-27 07:28:52.873149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:41.892 [... the same connect() errno = 111 failures against tqpair=0x18520c0 resume at 07:28:52.873304 ...]
00:33:41.894 [... the connect() errno = 111 / tqpair=0x18520c0 with addr=10.0.0.2, port=4420 failure loop continues unchanged from 07:28:52.874515 through 07:28:52.905324, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:41.894 [2024-11-27 07:28:52.905664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.894 [2024-11-27 07:28:52.905693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.894 qpair failed and we were unable to recover it. 00:33:41.894 [2024-11-27 07:28:52.906045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.894 [2024-11-27 07:28:52.906074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.894 qpair failed and we were unable to recover it. 00:33:41.894 [2024-11-27 07:28:52.906462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.894 [2024-11-27 07:28:52.906492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.894 qpair failed and we were unable to recover it. 00:33:41.894 [2024-11-27 07:28:52.906839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.894 [2024-11-27 07:28:52.906868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.907232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.907262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.907652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.907682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.908057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.908086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.908326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.908355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.908595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.908624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.908970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.908999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 
00:33:41.895 [2024-11-27 07:28:52.909377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.909407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.909768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.909797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.910186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.910216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.910594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.910623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.910999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.911029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.911395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.911428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.911788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.911817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.912178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.912208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.912555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.912586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.912945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.912974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 
00:33:41.895 [2024-11-27 07:28:52.913324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.913353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.913728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.913756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.913992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.914021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.914331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.914361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.914586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.914614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.914977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.915006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.915291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.915322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.915707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.915737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.916110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.916139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.916488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.916520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 
00:33:41.895 [2024-11-27 07:28:52.916883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.916912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.917277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.917307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.917664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.917694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.918054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.918084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.918470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.918499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.918847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.895 [2024-11-27 07:28:52.918876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.895 qpair failed and we were unable to recover it. 00:33:41.895 [2024-11-27 07:28:52.919101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.919130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.919535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.919565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.919874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.919904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.920277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.920308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 
00:33:41.896 [2024-11-27 07:28:52.920537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.920565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.920940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.920970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.921261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.921292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.921691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.921720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.922069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.922098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.922450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.922480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.922830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.922859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.923226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.923257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.923690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.923719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.924072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.924101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 
00:33:41.896 [2024-11-27 07:28:52.924552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.924583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.924927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.924957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.925326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.925357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.925718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.925747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.926103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.926132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.926491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.926520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.926872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.926901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.927231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.927260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.927618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.927647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.928015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.928045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 
00:33:41.896 [2024-11-27 07:28:52.928387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.928416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.928775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.928804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.929177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.929208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.929435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.929469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.929810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.929840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.930197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.930228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.930595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.930632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.930983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.931014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.931350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.931381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 00:33:41.896 [2024-11-27 07:28:52.931649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.896 [2024-11-27 07:28:52.931677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.896 qpair failed and we were unable to recover it. 
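errno = 111 is ECONNREFUSED: each connect() from the initiator reaches the target host, but nothing is accepting on 10.0.0.2:4420 at this point in the test, so every qpair reconnect attempt fails immediately and the driver gives the qpair up. A minimal shell sketch of the same probe, assuming a bash with /dev/tcp support on the test host (illustrative only, not part of the test scripts):

  # Try the same TCP endpoint the initiator is dialing; a refused connect
  # here is the shell-level equivalent of errno = 111 in the log above.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener is up on 10.0.0.2:4420"
  else
    echo "connect refused or timed out (ECONNREFUSED = errno 111)"
  fi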
00:33:41.896 [... connect() failed, errno = 111 / qpair failed messages continue through 07:28:52.935 ...]
00:33:41.897 [2024-11-27 07:28:52.935586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:41.897 [2024-11-27 07:28:52.935643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:41.897 [2024-11-27 07:28:52.935654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:41.897 [2024-11-27 07:28:52.935664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:41.897 [2024-11-27 07:28:52.935672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:41.897 [... further connect() failed, errno = 111 / qpair failed messages interleave from 07:28:52.935 to 07:28:52.938 ...]
00:33:41.897 [2024-11-27 07:28:52.938045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:33:41.897 [2024-11-27 07:28:52.938223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
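The app_setup_trace NOTICEs above spell out both capture paths for the trace data behind that 0xFFFF group mask. A sketch of the two options, using the exact invocation the log prints; the destination paths are illustrative assumptions:

  # Snapshot nvmf tracepoint events from the running app (command taken from the NOTICE above).
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # Or keep the shared-memory trace file for offline analysis, per the last NOTICE.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0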
00:33:41.897 [2024-11-27 07:28:52.938373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:33:41.897 [2024-11-27 07:28:52.938377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:33:41.897 [... connect() failed, errno = 111 / qpair failed messages continue through 07:28:52.941 ...]
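With reactors now running on cores 4-7, the target side is up and polling. A hedged way to confirm the thread-to-core placement from a shell on the test node; the nvmf_tgt process name and the reactor_<core> thread-naming convention are assumptions about this setup, not something the log states:

  # PSR is the CPU each thread last ran on; the reactor threads should sit on cores 4-7.
  ps -T -C nvmf_tgt -o tid,psr,comm | grep reactor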
00:33:41.897 [... the connect() failed, errno = 111 / qpair failed triplet keeps repeating (tqpair=0x18520c0, addr=10.0.0.2, port=4420) through 07:28:52.967, ending with ...]
00:33:41.899 [2024-11-27 07:28:52.967302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.899 [2024-11-27 07:28:52.967332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.899 qpair failed and we were unable to recover it.
00:33:41.899 [2024-11-27 07:28:52.967718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.967747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.968106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.968138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.968526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.968558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.968787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.968815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.969130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.969170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.969396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.969426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.969813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.969842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.970207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.970239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.970454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.970483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.970837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.970867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 
00:33:41.899 [2024-11-27 07:28:52.971239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.971269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.971622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.971653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.972011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.972041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.972387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.972417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.972776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.899 [2024-11-27 07:28:52.972805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.899 qpair failed and we were unable to recover it. 00:33:41.899 [2024-11-27 07:28:52.973193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.973223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.973474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.973502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.973845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.973875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.974129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.974172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.974493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.974524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 
00:33:41.900 [2024-11-27 07:28:52.974898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.974928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.975283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.975315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.975685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.975715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.976072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.976101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.976364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.976399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.976738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.976769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.977134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.977177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.977508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.977538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.977870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.977900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.978217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.978247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 
00:33:41.900 [2024-11-27 07:28:52.978612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.978642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.978896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.978925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.979286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.979320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.979572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.979601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.979859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.979888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.980237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.980269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.980606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.980638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.981016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.981046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.981404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.981436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.981782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.981811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 
00:33:41.900 [2024-11-27 07:28:52.982178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.982210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.982574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.982607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.982981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.983010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.983397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.983429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.983671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.983701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.984061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.984097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.984325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.984356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.984586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.984616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.900 [2024-11-27 07:28:52.984967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.900 [2024-11-27 07:28:52.984995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.900 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.985237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.985267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 
00:33:41.901 [2024-11-27 07:28:52.985617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.985647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.986000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.986030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.986330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.986360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.986729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.986758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.987117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.987147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.987544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.987575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.987807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.987836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.988099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.988129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.988363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.988395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.988607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.988639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 
00:33:41.901 [2024-11-27 07:28:52.988977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.989007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.989270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.989306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.989585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.989615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.989867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.989895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.990238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.990270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.990507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.990535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.990758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.990789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.991180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.991210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.991547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.991577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.991960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.991989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 
00:33:41.901 [2024-11-27 07:28:52.992258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.992287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.992684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.992715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.993076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.993114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.993382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.993414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.993640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.993670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.993910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.993941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.994297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.994329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.994687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.994719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.995085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.995116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.995534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.995564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 
00:33:41.901 [2024-11-27 07:28:52.995902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.995933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.996310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.996341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.996685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.996715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.997132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.997174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.997494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.997525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.997896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.997927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.998174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-11-27 07:28:52.998206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-11-27 07:28:52.998565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:52.998594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:52.998855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:52.998889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:52.999142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:52.999188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-11-27 07:28:52.999659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:52.999689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.000025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.000056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.000313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.000345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.000681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.000710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.001058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.001089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.001524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.001557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.001795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.001823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.002048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.002077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.002349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.002381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.002762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.002797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-11-27 07:28:53.003153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.003198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.003438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.003467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.003854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.003884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.004122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.004155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.004350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.004382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.004710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.004742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.004838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.004866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.005108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.005137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.005535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.005566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.005921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.005953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-11-27 07:28:53.006323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.006355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.006620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.006649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.007011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.007039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.007401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.007433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.007825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.007856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.008219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.008249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.008610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.008640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.008905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.008936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.009307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.009338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.009589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.009621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-11-27 07:28:53.009980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.010009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.010213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.010244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.010467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.010496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.010834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.010863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.011223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-11-27 07:28:53.011253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-11-27 07:28:53.011644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-11-27 07:28:53.011682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-11-27 07:28:53.011929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-11-27 07:28:53.011958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-11-27 07:28:53.012189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-11-27 07:28:53.012221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-11-27 07:28:53.012561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-11-27 07:28:53.012591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-11-27 07:28:53.012819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-11-27 07:28:53.012849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 
00:33:41.903 [2024-11-27 07:28:53.013225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.903 [2024-11-27 07:28:53.013257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:41.903 qpair failed and we were unable to recover it.
[... the same connect()/qpair error pair repeats for tqpair=0x18520c0 with only the timestamps changing (~210 occurrences between 07:28:53.013 and 07:28:53.088); duplicates elided ...]
00:33:42.182 [2024-11-27 07:28:53.087898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.182 [2024-11-27 07:28:53.087931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:42.182 qpair failed and we were unable to recover it.
00:33:42.182 [2024-11-27 07:28:53.088292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.088323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.088666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.088695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.089072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.089101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.089333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.089363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.089632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.089661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.090028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.090057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.090418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.090448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.090807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.090835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.091196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.091224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 00:33:42.182 [2024-11-27 07:28:53.091567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.182 [2024-11-27 07:28:53.091597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.182 qpair failed and we were unable to recover it. 
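For context, errno = 111 on Linux is ECONNREFUSED: the target host at 10.0.0.2 is reachable, but nothing is accepting TCP connections on 4420 (the IANA-assigned NVMe/TCP port), so every reconnect attempt fails immediately. A minimal standalone sketch in C (plain sockets, not SPDK's posix_sock_create()) that reproduces the same errno when no listener is up:

    /* Minimal repro sketch, not SPDK code: connect() to a reachable host
     * with no listener on the port fails with ECONNREFUSED (111 on Linux).
     * If the host were unreachable instead, connect() would time out or
     * fail with EHOSTUNREACH rather than 111. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }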
[... four more identical connect() failures (errno = 111) against 10.0.0.2:4420, 2024-11-27 07:28:53.091810 through 07:28:53.092481, each ending "qpair failed and we were unable to recover it." ...]
00:33:42.182 [2024-11-27 07:28:53.092755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1847e10 is same with the state(6) to be set
00:33:42.182 Read completed with error (sct=0, sc=8)
00:33:42.182 starting I/O failed
[... 31 more outstanding I/Os complete with error (sct=0, sc=8) in this batch, 24 reads and 8 writes in total, each followed by "starting I/O failed" ...]
00:33:42.182 [2024-11-27 07:28:53.093828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:42.182 Read completed with error (sct=0, sc=8)
00:33:42.182 starting I/O failed
[... 31 more outstanding I/Os complete with error (sct=0, sc=8) in this batch, 22 reads and 10 writes in total, each followed by "starting I/O failed" ...]
00:33:42.183 [2024-11-27 07:28:53.094561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.183 [2024-11-27 07:28:53.094899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.183 [2024-11-27 07:28:53.094930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:42.183 qpair failed and we were unable to recover it.
[... eight more identical failures, 2024-11-27 07:28:53.095148 through 07:28:53.097700 ...]
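Decoding the statuses above, per the NVMe base specification: status code type 0x0 is Generic Command Status, and status code 0x08 within it is "Command Aborted due to SQ Deletion", so every "(sct=0, sc=8)" completion is the host driver aborting a queued I/O while it tears down the dead queue pair; none of these commands reached the target. The "-6" in the CQ transport error is -ENXIO, which the message itself spells out as "No such device or address". A small decoding sketch with locally defined constants (values taken from the spec, not copied from SPDK headers):

    /* Sketch: decode the "(sct=0, sc=8)" aborted completions above.
     * The constants below mirror the NVMe base spec; they are defined
     * locally here rather than pulled from spdk/nvme_spec.h. */
    #include <stdio.h>

    #define NVME_SCT_GENERIC            0x0  /* Generic Command Status */
    #define NVME_SC_ABORTED_SQ_DELETION 0x8  /* Command Aborted due to SQ Deletion */

    static const char *nvme_status_str(int sct, int sc)
    {
        if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION)
            return "aborted: submission queue deleted before completion";
        return "some other status";
    }

    int main(void)
    {
        /* Matches the log: the driver aborted the I/O, the target never saw it. */
        printf("sct=0, sc=8 -> %s\n", nvme_status_str(0, 8));
        return 0;
    }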
00:33:42.183 [2024-11-27 07:28:53.098096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.183 [2024-11-27 07:28:53.098125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:42.183 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every subsequent reconnect attempt, 2024-11-27 07:28:53.098393 through 07:28:53.145195, 129 more occurrences ...]
00:33:42.187 [2024-11-27 07:28:53.145470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.145498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.145862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.145890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.146264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.146294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.146560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.146588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.146952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.146980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.147352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.147382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.147710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.147738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.148122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.148151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.148308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.148338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.148697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.148725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 
00:33:42.187 [2024-11-27 07:28:53.149088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.149117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.149497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.149526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.149890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.149926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.150291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.150321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.150519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.150547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.150783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.150812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.151183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.151212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.151494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.151523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.151824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.151854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.152217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.152248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 
00:33:42.187 [2024-11-27 07:28:53.152599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.152628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.153037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.153067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.153403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.153433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.153792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.153820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.154178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.154208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.154540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.154568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.154946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.154976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.155353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.155384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.155750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.155778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.156143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.156201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 
00:33:42.187 [2024-11-27 07:28:53.156415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.156443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.156666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.156695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.157047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.157077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.157463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.157493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.157874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.187 [2024-11-27 07:28:53.157903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.187 qpair failed and we were unable to recover it. 00:33:42.187 [2024-11-27 07:28:53.158148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.158189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.158441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.158469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.158883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.158912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.159133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.159171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.159345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.159373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 
00:33:42.188 [2024-11-27 07:28:53.159730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.159759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.159919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.159947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.160327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.160357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.160730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.160759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.161001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.161028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.161233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.161264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.161652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.161680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.162052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.162080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.162184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.162215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.162534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.162562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 
00:33:42.188 [2024-11-27 07:28:53.162934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.162962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.163316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.163346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.163711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.163739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.164128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.164157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.164536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.164566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.164937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.164966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.165188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.165219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.165582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.165611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.165844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.165876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.166125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.166153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 
00:33:42.188 [2024-11-27 07:28:53.166506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.166535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.166778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.166806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.167060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.167088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.167307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.167336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.167571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.167599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.167948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.167977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.168339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.168368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.168733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.168762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.169122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.169152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.169531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.169561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 
00:33:42.188 [2024-11-27 07:28:53.169829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.169857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.170205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.170235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.170513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.170542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.188 [2024-11-27 07:28:53.170645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.188 [2024-11-27 07:28:53.170673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.188 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.171028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.171057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.171409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.171438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.171752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.171780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.172112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.172141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.172523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.172552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.172910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.172937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 
00:33:42.189 [2024-11-27 07:28:53.173638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.173677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.174042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.174073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.174409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.174439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.174798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.174827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.175057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.175086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.175439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.175470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.175692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.175725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.176067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.176097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.176462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.176492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.176859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.176888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 
00:33:42.189 [2024-11-27 07:28:53.177239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.177272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.177645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.177674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.177895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.177923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.178284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.178314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.178680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.178709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.178954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.178985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.179336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.179366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.179727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.179755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.180117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.180145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.180508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.180538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 
00:33:42.189 [2024-11-27 07:28:53.180765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.180793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.180950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.180980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.181340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.181371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.181740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.181769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.181997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.182026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.182411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.182441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.182811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.182839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.183096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.183131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.183519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.183548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.183781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.183809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 
00:33:42.189 [2024-11-27 07:28:53.184190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.184220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.184597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.189 [2024-11-27 07:28:53.184626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.189 qpair failed and we were unable to recover it. 00:33:42.189 [2024-11-27 07:28:53.184985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.185012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.185407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.185438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.185659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.185687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.186012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.186042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.186410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.186440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.186814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.186843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.187204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.187235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.187484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.187513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 
00:33:42.190 [2024-11-27 07:28:53.187871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.187900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.188282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.188312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.188653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.188682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.189057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.189085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.189432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.189463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.189813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.189841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.190203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.190234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.190616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.190644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.191014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.191042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 00:33:42.190 [2024-11-27 07:28:53.191387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.190 [2024-11-27 07:28:53.191416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.190 qpair failed and we were unable to recover it. 
00:33:42.190 [2024-11-27 07:28:53.191789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.190 [2024-11-27 07:28:53.191819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:42.190 qpair failed and we were unable to recover it.
00:33:42.190 [... the identical three-message failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from [2024-11-27 07:28:53.192199] through [2024-11-27 07:28:53.269970], elapsed 00:33:42.190-00:33:42.196 ...]
00:33:42.196 [2024-11-27 07:28:53.270330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.270360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.270601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.270630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.270982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.271012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.271396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.271428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.271571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.271604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.271872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.271906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.272176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.272208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.272577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.272610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.272987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.273017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.273388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.273419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 
00:33:42.196 [2024-11-27 07:28:53.273781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.273814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.274183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.274226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.274557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.274588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.274823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.274852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.275214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.275246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.275488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.275516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.275730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.275759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.276107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.276139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.276558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.276587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.276826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.276855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 
00:33:42.196 [2024-11-27 07:28:53.277227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.277261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.277609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.277639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.277967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.277998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.278223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.278256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.278640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.278669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.279033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.279067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.279440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.279474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.279824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.279856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.280227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.280258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.280619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.280647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 
00:33:42.196 [2024-11-27 07:28:53.281013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.281043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.196 [2024-11-27 07:28:53.281423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.196 [2024-11-27 07:28:53.281454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.196 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.281817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.281850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.282227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.282261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.282695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.282726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.283078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.283106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.283452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.283484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.283849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.283883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.284225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.284254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.284485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.284514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 
00:33:42.197 [2024-11-27 07:28:53.284875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.284904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.285126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.285157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.285573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.285602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.285979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.286008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.286391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.286422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.286755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.286789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.287038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.287068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.287403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.287434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.287651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.287680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.288037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.288066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 
00:33:42.197 [2024-11-27 07:28:53.288409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.288438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.288812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.288841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.289191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.289223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.289448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.289477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.289819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.289849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.290216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.290246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.290592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.290622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.290999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.291027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.291381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.291410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.291765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.291793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 
00:33:42.197 [2024-11-27 07:28:53.292172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.292202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.292559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.292587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.292841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.292871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.293097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.293126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.293336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.293365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.293733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.293762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.293867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.293903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.294256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.294286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.294616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.294644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 00:33:42.197 [2024-11-27 07:28:53.294880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.197 [2024-11-27 07:28:53.294908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.197 qpair failed and we were unable to recover it. 
00:33:42.197 [2024-11-27 07:28:53.295296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.295325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.295698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.295726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.296109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.296139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.296379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.296408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.296617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.296645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.297015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.297044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.297391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.297422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.297667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.297695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.298050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.298080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.298441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.298479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 
00:33:42.198 [2024-11-27 07:28:53.298734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.298765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.298982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.299010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.299364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.299395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.299767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.299795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.300154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.300195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.300374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.300402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.300815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.300843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.301199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.301229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.301578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.301606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.301701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.301736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 
00:33:42.198 [2024-11-27 07:28:53.302054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.302082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.302454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.302484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.302850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.302878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.303156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.303198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.303573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.303601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.303966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.303995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.304355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.304384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.304744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.304772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.305144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.305184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.305392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.305420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 
00:33:42.198 [2024-11-27 07:28:53.305763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.305792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.306157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.306199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.306429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.306458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.306817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.306845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.307213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.307243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.307538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.307567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.307992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.308026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.308281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.308311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.198 qpair failed and we were unable to recover it. 00:33:42.198 [2024-11-27 07:28:53.308683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.198 [2024-11-27 07:28:53.308711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.309077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.309105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 
00:33:42.199 [2024-11-27 07:28:53.309463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.309494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.309716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.309745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.309983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.310011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.310279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.310309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.310579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.310607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.310960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.310988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.311369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.311399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.311778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.311808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.312179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.312208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.312569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.312597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 
00:33:42.199 [2024-11-27 07:28:53.312959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.312989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.313359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.313389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.313749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.313778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.314007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.314034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.314408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.314437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.314796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.314825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.315187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.315216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.315604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.315633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.315990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.316020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 00:33:42.199 [2024-11-27 07:28:53.316403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.199 [2024-11-27 07:28:53.316434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.199 qpair failed and we were unable to recover it. 
00:33:42.199 [2024-11-27 07:28:53.316809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.199 [2024-11-27 07:28:53.316837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:42.199 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x18520c0 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 07:28:53.317065 through 07:28:53.386748 ...]
00:33:42.478 [2024-11-27 07:28:53.387289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.478 [2024-11-27 07:28:53.387414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b74000b90 with addr=10.0.0.2, port=4420
00:33:42.478 qpair failed and we were unable to recover it.
00:33:42.478 [2024-11-27 07:28:53.387711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.387749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b74000b90 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.388119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.388150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b74000b90 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.388550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.388657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b74000b90 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.388946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.388983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b74000b90 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.389390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.389500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b74000b90 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.389891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.389923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.390268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.390299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.390623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.390652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.391050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.391078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.391438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.391469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 
00:33:42.478 [2024-11-27 07:28:53.391828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.391856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.392084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.392115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.392387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.392418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.392815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.392843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.393189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.393221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.393595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.393624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.393987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.394015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.478 [2024-11-27 07:28:53.394396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.478 [2024-11-27 07:28:53.394426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.478 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.394791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.394819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.395190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.395220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 
00:33:42.479 [2024-11-27 07:28:53.395601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.395638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.396018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.396046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.396264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.396293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.396566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.396595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.396921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.396949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.397334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.397363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.397589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.397618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.398000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.398027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.398188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.398219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.398480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.398509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 
00:33:42.479 [2024-11-27 07:28:53.398875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.398903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.399319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.399349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.399724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.399762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.399985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.400017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.400245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.400276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.400626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.400655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.401017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.401046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.401404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.401435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.401772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.401800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.402083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.402111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 
00:33:42.479 [2024-11-27 07:28:53.402404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.402434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.402656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.402685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.403070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.403098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.403370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.403399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.403533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.403563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.403916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.403944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.404306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.404337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.404710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.404744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.405083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.405112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.405513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.405543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 
00:33:42.479 [2024-11-27 07:28:53.405896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.405926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.406299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.406328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.406681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.406711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.406934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.406962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.479 [2024-11-27 07:28:53.407318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.479 [2024-11-27 07:28:53.407350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.479 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.407729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.407758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.408119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.408147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.408526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.408557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.408924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.408952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.409298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.409328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 
00:33:42.480 [2024-11-27 07:28:53.409762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.409792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.410018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.410047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.410428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.410457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.410821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.410850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.411091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.411119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.411512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.411541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.411918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.411946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.412206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.412236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.412626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.412654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.413015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.413043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 
00:33:42.480 [2024-11-27 07:28:53.413410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.413440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.413811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.413840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.414213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.414243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.414341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.414368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.414728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.414755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.415130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.415169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.415416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.415444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.415783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.415811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.416035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.416064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.416428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.416458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 
00:33:42.480 [2024-11-27 07:28:53.416830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.416858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.417088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.417116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.417300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.417329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.417695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.417723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.418089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.418117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.418495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.418527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.418755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.418783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.419020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.419047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.419428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.419464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.419851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.419880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 
00:33:42.480 [2024-11-27 07:28:53.420249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.420279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.480 qpair failed and we were unable to recover it. 00:33:42.480 [2024-11-27 07:28:53.420531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.480 [2024-11-27 07:28:53.420558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.420926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.420954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.421323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.421353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.421587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.421615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.421866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.421896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.422150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.422190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.422581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.422609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.422955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.422986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.423206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.423235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 
00:33:42.481 [2024-11-27 07:28:53.423574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.423603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.423970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.423998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.424415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.424446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.424797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.424826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.425188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.425218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.425556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.425586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.425950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.425978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.426326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.426355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.426735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.426763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.426996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.427026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 
00:33:42.481 [2024-11-27 07:28:53.427390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.427419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.427653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.427681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.428085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.428113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.428514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.428544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.428894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.428924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.429144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.429199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.429582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.429611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.429858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.429889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.430245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.430276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.430524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.430553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 
00:33:42.481 [2024-11-27 07:28:53.430912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.430940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.431321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.431350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.431709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.431738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.432096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.432124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.432407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.432436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.432819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.432848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.433073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.481 [2024-11-27 07:28:53.433101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.481 qpair failed and we were unable to recover it. 00:33:42.481 [2024-11-27 07:28:53.433562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.433592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.433953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.433981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.434233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.434264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 
00:33:42.482 [2024-11-27 07:28:53.434624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.434654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.434877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.434906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.435294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.435323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.435708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.435737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.436087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.436117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.436481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.436510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.436849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.436879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.437242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.437271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.437479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.437507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 00:33:42.482 [2024-11-27 07:28:53.437797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.437825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it. 
00:33:42.482 [2024-11-27 07:28:53.438182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.482 [2024-11-27 07:28:53.438212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.482 qpair failed and we were unable to recover it.
00:33:42.482 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 07:28:53.438 through 07:28:53.513; identical entries elided ...]
00:33:42.488 [2024-11-27 07:28:53.512963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.512993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it.
00:33:42.488 [2024-11-27 07:28:53.513351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.513384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.513622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.513651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.513916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.513949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.514313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.514345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.514697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.514727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.515086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.515113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.515512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.515543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.515941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.515974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.516326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.516357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.516700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.516738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 
00:33:42.488 [2024-11-27 07:28:53.517081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.517112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.517462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.517492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.517853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.517883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.518235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.518268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.518497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.518525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.518898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.518933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.519134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.519181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.519548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.519578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.519942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.519973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.520329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.520359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 
00:33:42.488 [2024-11-27 07:28:53.520595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.520627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.520860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.520891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.521128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.521156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.521590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.521621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.521867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.521897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.522281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.522312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.522521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.522554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.522906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.522938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.523293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.523324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.523690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.523727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 
00:33:42.488 [2024-11-27 07:28:53.524086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.524117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.524481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.524512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.524871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.524901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.525122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.525151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.525547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.488 [2024-11-27 07:28:53.525578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.488 qpair failed and we were unable to recover it. 00:33:42.488 [2024-11-27 07:28:53.525938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.525968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.526335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.526367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.526596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.526627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.527014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.527046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.527273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.527304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 
00:33:42.489 [2024-11-27 07:28:53.527680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.527709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.527934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.527964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.528227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.528262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.528510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.528541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.528902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.528932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.529293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.529326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.529695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.529725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.529949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.529977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.530229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.530263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.530632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.530672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 
00:33:42.489 [2024-11-27 07:28:53.531038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.531075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.531235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.531269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.531532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.531563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.531908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.531939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.532227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.532257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.532643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.532674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.533039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.533073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.533528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.533560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.533923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.533954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.534252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.534286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 
00:33:42.489 [2024-11-27 07:28:53.534639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.534673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.535035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.535064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.535458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.535489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.535847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.535879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.536237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.536267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.536557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.536586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.536806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.536836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.537194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.537224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.537560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.537593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.537817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.537848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 
00:33:42.489 [2024-11-27 07:28:53.538185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.538224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.538486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.489 [2024-11-27 07:28:53.538518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.489 qpair failed and we were unable to recover it. 00:33:42.489 [2024-11-27 07:28:53.538857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.538891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.539151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.539199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.539574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.539604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.539955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.539988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.540332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.540365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.540717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.540753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.541092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.541125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.541496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.541529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 
00:33:42.490 [2024-11-27 07:28:53.541883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.541917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.542270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.542300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.542678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.542708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.543063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.543093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.543440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.543469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.543686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.543716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.543954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.543985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.544434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.544463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.544830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.544860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.545105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.545138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 
00:33:42.490 [2024-11-27 07:28:53.545496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.545526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.545763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.545794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.546182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.546215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.546446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.546474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.546589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.546620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.546988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.547019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.547397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.547428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.547669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.547700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.548034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.548062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.548284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.548317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 
00:33:42.490 [2024-11-27 07:28:53.548677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.548706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.549084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.549114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.549385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.549416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.549760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.549789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.550207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.550244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.550590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.550621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.490 qpair failed and we were unable to recover it. 00:33:42.490 [2024-11-27 07:28:53.550988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.490 [2024-11-27 07:28:53.551017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.551377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.551409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.551773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.551804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.552177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.552207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 
00:33:42.491 [2024-11-27 07:28:53.552615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.552645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.552864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.552895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.553241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.553272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.553627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.553659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.554044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.554073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.554452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.554482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.554727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.554755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.555149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.555193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.555585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.555616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.555978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.556007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 
00:33:42.491 [2024-11-27 07:28:53.556382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.556413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.556770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.556799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.557169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.557198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.557534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.557563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.557929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.557958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.558318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.558350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.558732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.558763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.559137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.559178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.559535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.559564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 00:33:42.491 [2024-11-27 07:28:53.559908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.491 [2024-11-27 07:28:53.559937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.491 qpair failed and we were unable to recover it. 
00:33:42.491 [2024-11-27 07:28:53.560286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.491 [2024-11-27 07:28:53.560317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:42.491 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 07:28:53.560 through 07:28:53.637; errno 111 is ECONNREFUSED ...]
00:33:42.497 [2024-11-27 07:28:53.637533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.497 [2024-11-27 07:28:53.637563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420
00:33:42.497 qpair failed and we were unable to recover it.
00:33:42.497 [2024-11-27 07:28:53.637945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.637973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.638341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.638372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.638468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.638495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.638661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.638690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.639041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.639069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.639302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.639331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.639782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.639813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.640175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.640205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.640442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.640470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.640663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.640692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 
00:33:42.497 [2024-11-27 07:28:53.641071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.641100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.641477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.641507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.641830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.641859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.642224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.642255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.642626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.642657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.643025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.643053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.643330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.643359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.643699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.643727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.644101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.644131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.644515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.644547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 
00:33:42.497 [2024-11-27 07:28:53.644927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.644955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.645325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.645355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.645713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.645742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.497 [2024-11-27 07:28:53.645990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.497 [2024-11-27 07:28:53.646019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.497 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.646252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.646284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.646681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.646710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.646935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.646963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.647306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.647335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.647561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.647589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.647931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.647960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 
00:33:42.498 [2024-11-27 07:28:53.648333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.648364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.648740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.648771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.649155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.649197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.649436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.649464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.649819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.649847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.649946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.649973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18520c0 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.650419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.650541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.651004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.651042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.651483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.651588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 00:33:42.498 [2024-11-27 07:28:53.652048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.498 [2024-11-27 07:28:53.652084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.498 qpair failed and we were unable to recover it. 
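For reference, errno 111 on Linux is ECONNREFUSED: each connect() attempt reaches 10.0.0.2 but nothing is accepting on port 4420, which is the expected failure mode while this disconnect test has the target side down. A minimal standalone sketch, assuming a Linux host and a local port with no listener (127.0.0.1 and the port below are placeholders for illustration, not the test bed's 10.0.0.2:4420), that reproduces the same errno posix_sock_create logs:

/* Reproduce errno 111 (ECONNREFUSED on Linux): connect() to a port with no
 * listener fails the same way the posix_sock_create entries above report. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assumes no local listener */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener bound, Linux answers with RST and connect()
         * fails with ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with a plain cc invocation and run on a Linux box with nothing listening on the port, this should print "connect() failed, errno = 111 (Connection refused)".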
00:33:42.499 [... the errno = 111 / qpair-failure triplet keeps repeating for tqpair=0x7f4b6c000b90, roughly 40 more times between 07:28:53.652 and 07:28:53.667; repetitions omitted ...]
00:33:42.499 [... about eight more errno = 111 / qpair-failure triplets for tqpair=0x7f4b6c000b90 between 07:28:53.667 and 07:28:53.669; omitted ...]
00:33:42.767 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:42.767 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:33:42.767 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:42.767 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:42.767 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:42.767 [... interleaved between these shell-trace lines, the same triplet repeats about nine more times (07:28:53.669 through 07:28:53.673); omitted ...]
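The shell trace above is the nvmf_target_disconnect_tc2 harness finishing its start_nvmf_tgt phase while the host side keeps retrying; once the target is actually listening on port 4420 again, a connect() attempt should succeed and the error storm end. A rough sketch of that retry pattern, assuming blocking sockets and treating ECONNREFUSED as "target not up yet" (illustrative only, not SPDK's actual nvme_tcp reconnect code; connect_with_retry and its parameters are invented for this example):

/* Illustrative retry loop: keep attempting connect() while the target is
 * down, treating ECONNREFUSED (errno 111) as a transient condition. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1)
        return -1;

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            return fd;                        /* target is accepting again */
        int err = errno;                      /* save before fprintf/close */
        fprintf(stderr, "try %d: connect() failed, errno = %d\n", i, err);
        close(fd);
        if (err != ECONNREFUSED)
            break;                            /* a different error: give up */
        usleep(100 * 1000);                   /* back off 100 ms, then retry */
    }
    return -1;
}

int main(void)
{
    int fd = connect_with_retry("127.0.0.1", 4420, 5); /* placeholder target */
    if (fd >= 0)
        close(fd);
    return 0;
}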
00:33:42.768 [... the connect() failed (errno = 111) / qpair-failure triplet for tqpair=0x7f4b6c000b90 repeats roughly 70 more times between 07:28:53.672 and 07:28:53.698; repetitions omitted, last occurrence shown below ...]
00:33:42.769 [2024-11-27 07:28:53.698066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.769 [2024-11-27 07:28:53.698095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420
00:33:42.770 qpair failed and we were unable to recover it.
00:33:42.770 [2024-11-27 07:28:53.698498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.698530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.698875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.698905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.699266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.699298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.699673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.699705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.699939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.699969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.700342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.700372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.700748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.700775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.701142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.701180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.701405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.701434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.701814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.701843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 
00:33:42.770 [2024-11-27 07:28:53.702217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.702247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.702628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.702658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.703021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.703049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.703280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.703316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.703671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.703703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.704070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.704100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.704462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.704493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.704852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.704882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.705247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.705277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.705605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.705635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 
00:33:42.770 [2024-11-27 07:28:53.706000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.706030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.706403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.706433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.706700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.706732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.707072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.707100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.707468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.707500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.707880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.707911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.708281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.708312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.708678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.708708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.709067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.709096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.709415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.709445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 
00:33:42.770 [2024-11-27 07:28:53.709806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.709835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.710067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.710095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.710493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.710525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.710870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.710899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.711153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.711193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.711567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.711598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.711831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.770 [2024-11-27 07:28:53.711859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.770 qpair failed and we were unable to recover it. 00:33:42.770 [2024-11-27 07:28:53.712224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.771 [2024-11-27 07:28:53.712256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.771 qpair failed and we were unable to recover it. 00:33:42.771 [2024-11-27 07:28:53.712601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.771 [2024-11-27 07:28:53.712631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.771 qpair failed and we were unable to recover it. 00:33:42.771 [2024-11-27 07:28:53.712872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.771 [2024-11-27 07:28:53.712902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.771 qpair failed and we were unable to recover it. 
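errno 111 on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the target side of the disconnect test is down, so every connect() from the initiator is refused (typically with a TCP RST) and the qpair cannot be re-established. A minimal shell sketch of the same probe, assuming a Linux host with bash; the address and port are taken from the log above:

    # Probe 10.0.0.2:4420 the way the initiator's posix_sock_create() does.
    # /dev/tcp/<host>/<port> is a bash built-in pseudo-path that performs a connect().
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect refused or timed out - consistent with errno = 111 (ECONNREFUSED) above"
    fi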
00:33:42.771 [2024-11-27 07:28:53.713178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.771 [2024-11-27 07:28:53.713210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420
00:33:42.771 qpair failed and we were unable to recover it.
00:33:42.771 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:42.771 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:42.771 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.771 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the triplet repeats from 07:28:53.713600 through 07:28:53.715601, interleaved with the xtrace lines above ...]
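The xtrace lines above show the harness moving on with setup while the old controller's reconnect attempts keep failing in the background: it installs a cleanup trap (process_shm, then nvmftestfini, on SIGINT/SIGTERM/EXIT) and then calls rpc_cmd bdev_malloc_create 64 512 -b Malloc0, which asks the target for a 64 MB RAM-backed bdev with 512-byte blocks. The bare "Malloc0" printed further down is that RPC's reply, the name of the created bdev. A sketch of the same call issued directly, assuming an SPDK checkout and an nvmf_tgt listening on the default RPC socket (rpc_cmd in the harness wraps this same client):

    # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0.
    # bdev_malloc_create prints the name of the new bdev on success.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0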
[... 00:33:42.771-00:33:42.773: the connect() failed / sock connection error / "qpair failed" triplet keeps repeating from 07:28:53.715809 through 07:28:53.747380 ...]
00:33:42.774 [2024-11-27 07:28:53.747725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.774 [2024-11-27 07:28:53.747756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420
00:33:42.774 qpair failed and we were unable to recover it.
[... the triplet repeats from 07:28:53.748115 through 07:28:53.750510 ...]
00:33:42.774 Malloc0
[... the triplet repeats from 07:28:53.750867 through 07:28:53.751301 ...]
00:33:42.774 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.774 [2024-11-27 07:28:53.751696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.751725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:42.774 [2024-11-27 07:28:53.752091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.752121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.774 [2024-11-27 07:28:53.752349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.752380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.774 [2024-11-27 07:28:53.752592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.752621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.752866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.752899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.753142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.753198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.753555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.753587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.753924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.753960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 
00:33:42.774 [2024-11-27 07:28:53.754181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.754220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.754521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.754550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.754918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.754948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.755290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.755320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.755713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.755745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.756106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.756134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.756444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.756475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.756858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.756888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.757133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.757169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.757559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.757588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 
00:33:42.774 [2024-11-27 07:28:53.757657] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.774 [2024-11-27 07:28:53.757948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.757981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.758342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.758372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.758677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.758714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.758962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.758999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.759409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.759441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.759793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.759825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.774 qpair failed and we were unable to recover it. 00:33:42.774 [2024-11-27 07:28:53.760175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.774 [2024-11-27 07:28:53.760208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.760495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.760524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.760898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.760927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 
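The '*** TCP Transport Init ***' notice at the start of the block above is nvmf_tcp_create() confirming that the rpc_cmd nvmf_create_transport -t tcp call traced earlier took effect. Outside the harness, the same step against a running nvmf_tgt would look roughly like this (a sketch assuming the default RPC socket at /var/tmp/spdk.sock):

  # Create the NVMe-oF TCP transport; this is what triggers the
  # "TCP Transport Init" notice in the target log.
  ./scripts/rpc.py nvmf_create_transport -t tcp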
00:33:42.775 [2024-11-27 07:28:53.761207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.761238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.761592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.761621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.761986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.762017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.762391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.762421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.762858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.762886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.763016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.763044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.763416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.763448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.763706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.763736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.764000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.764030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.764391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.764422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 
00:33:42.775 [2024-11-27 07:28:53.764674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.764705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.765048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.765080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.765423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.765454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.765698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.765728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.766007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.766037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.766320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.766351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.766586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.766616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.775 [2024-11-27 07:28:53.766973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.767002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:42.775 [2024-11-27 07:28:53.767368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.767400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 
00:33:42.775 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.775 [2024-11-27 07:28:53.767614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.767643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.775 [2024-11-27 07:28:53.768033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.768064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.768467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.768498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.768783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.768812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.769095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.769129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.769491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.769522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.769878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.769908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.770279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.770311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.770668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.770700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 
00:33:42.775 [2024-11-27 07:28:53.771063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.771094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.771308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.771338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.771707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.771737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.772108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.772137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.772567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.772596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.775 [2024-11-27 07:28:53.772940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.775 [2024-11-27 07:28:53.772971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.775 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.773324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.773355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.773606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.773635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.774002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.774032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.774452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.774482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 
00:33:42.776 [2024-11-27 07:28:53.774893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.774921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.775292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.775322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.775704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.775735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.775969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.776000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.776350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.776383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.776676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.776708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.776936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.776965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.777434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.777465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.777814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.777842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.778209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.778241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 
00:33:42.776 [2024-11-27 07:28:53.778624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.778658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.776 [2024-11-27 07:28:53.778995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.779025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:42.776 [2024-11-27 07:28:53.779363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.779395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.776 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.776 [2024-11-27 07:28:53.779774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.779804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.780178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.780210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.780597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.780626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.780989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.781020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.781358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.781388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 
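The rpc_cmd calls traced in between the connect failures build up the target: nvmf_create_subsystem creates nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001 and any-host access (-a), and nvmf_subsystem_add_ns attaches the Malloc0 bdev as its namespace. Done by hand against a bare nvmf_tgt, the equivalent sequence is roughly the following sketch (the 64 MiB / 512 B malloc bdev sizing is illustrative, not taken from this run):

  # RAM-backed bdev, subsystem, and namespace - mirrors the traced rpc_cmd calls.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0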
00:33:42.776 [2024-11-27 07:28:53.781643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.781676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.781929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.781969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.782332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.782362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.782596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.782624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.782990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.783019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.783395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.783426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.783827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.783858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.784092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.784120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.784376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.784409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.784789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.784818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 
00:33:42.776 [2024-11-27 07:28:53.785184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.785217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.785626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.785657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.786014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.786042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.776 [2024-11-27 07:28:53.786399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.776 [2024-11-27 07:28:53.786428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.776 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.786792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.786821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.787188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.787221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.787576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.787606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.787974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.788004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.788403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.788434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.788809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.788839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 
00:33:42.777 [2024-11-27 07:28:53.789220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.789252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.789629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.789659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.790015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.790044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.790436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.790465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.790811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.790841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.777 [2024-11-27 07:28:53.791117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.791146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-27 07:28:53.791526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.791556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.777 [2024-11-27 07:28:53.791824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.791856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it.
00:33:42.777 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.777 [2024-11-27 07:28:53.792069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.792101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.792515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.792548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.792881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.792915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.793306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.793337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.793711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.793740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.794117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.794147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.794531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.794561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.794833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.794861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.795190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.795221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 
00:33:42.777 [2024-11-27 07:28:53.795601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.795630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.795983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.796013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.796381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.796418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.796744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.796773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.797031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.797059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.797402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.797433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 00:33:42.777 [2024-11-27 07:28:53.797819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.777 [2024-11-27 07:28:53.797848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b6c000b90 with addr=10.0.0.2, port=4420 00:33:42.777 qpair failed and we were unable to recover it. 
00:33:42.777 [2024-11-27 07:28:53.798030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.777 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.777 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:42.778 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.778 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.778 [2024-11-27 07:28:53.808939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.809078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.809130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.809154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.809189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.809245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 00:33:42.778 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.778 07:28:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2586144 00:33:42.778 [2024-11-27 07:28:53.818808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.818905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.818936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.818953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.818967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.818999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 
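With the '*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***' notice and the discovery listener added above, the target side is fully configured. From an initiator with nvme-cli installed, the equivalent host-side steps would be roughly the following sketch (not the test's own connection path, which drives the SPDK host stack directly):

  # Query the discovery service, then connect to the subsystem it reports.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1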
00:33:42.778 [2024-11-27 07:28:53.828683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.828777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.828802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.828813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.828823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.828846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 00:33:42.778 [2024-11-27 07:28:53.838663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.838743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.838761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.838768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.838775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.838792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 00:33:42.778 [2024-11-27 07:28:53.848797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.848876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.848895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.848902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.848909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.848927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 
00:33:42.778 [2024-11-27 07:28:53.858734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.858804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.858821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.858829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.858836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.858853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 00:33:42.778 [2024-11-27 07:28:53.868776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.868844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.868861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.868869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.868875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.868892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 00:33:42.778 [2024-11-27 07:28:53.878815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.878889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.878907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.878915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.878921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.878939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 
00:33:42.778 [2024-11-27 07:28:53.888883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.888962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.888979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.888987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.888993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.889010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 00:33:42.778 [2024-11-27 07:28:53.898929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.899034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.899051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.899059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.899065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.899082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 00:33:42.778 [2024-11-27 07:28:53.908944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.909047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.778 [2024-11-27 07:28:53.909064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.778 [2024-11-27 07:28:53.909077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.778 [2024-11-27 07:28:53.909084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.778 [2024-11-27 07:28:53.909101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.778 qpair failed and we were unable to recover it. 
00:33:42.778 [2024-11-27 07:28:53.918963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.778 [2024-11-27 07:28:53.919033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.779 [2024-11-27 07:28:53.919051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.779 [2024-11-27 07:28:53.919058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.779 [2024-11-27 07:28:53.919064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.779 [2024-11-27 07:28:53.919080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.779 qpair failed and we were unable to recover it. 00:33:42.779 [2024-11-27 07:28:53.929001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.779 [2024-11-27 07:28:53.929074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.779 [2024-11-27 07:28:53.929092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.779 [2024-11-27 07:28:53.929099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.779 [2024-11-27 07:28:53.929106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.779 [2024-11-27 07:28:53.929122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.779 qpair failed and we were unable to recover it. 00:33:42.779 [2024-11-27 07:28:53.938988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.779 [2024-11-27 07:28:53.939055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.779 [2024-11-27 07:28:53.939072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.779 [2024-11-27 07:28:53.939079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.779 [2024-11-27 07:28:53.939086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.779 [2024-11-27 07:28:53.939102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.779 qpair failed and we were unable to recover it. 
00:33:42.779 [2024-11-27 07:28:53.949206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.779 [2024-11-27 07:28:53.949295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.779 [2024-11-27 07:28:53.949313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.779 [2024-11-27 07:28:53.949320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.779 [2024-11-27 07:28:53.949327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.779 [2024-11-27 07:28:53.949350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.779 qpair failed and we were unable to recover it. 00:33:42.779 [2024-11-27 07:28:53.959082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.779 [2024-11-27 07:28:53.959153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.779 [2024-11-27 07:28:53.959177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.779 [2024-11-27 07:28:53.959184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.779 [2024-11-27 07:28:53.959191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:42.779 [2024-11-27 07:28:53.959208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:42.779 qpair failed and we were unable to recover it. 00:33:43.043 [2024-11-27 07:28:53.969192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.043 [2024-11-27 07:28:53.969269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.043 [2024-11-27 07:28:53.969286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.043 [2024-11-27 07:28:53.969294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.043 [2024-11-27 07:28:53.969301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.043 [2024-11-27 07:28:53.969317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.043 qpair failed and we were unable to recover it. 
00:33:43.043 [2024-11-27 07:28:53.979176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.043 [2024-11-27 07:28:53.979248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.043 [2024-11-27 07:28:53.979266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.043 [2024-11-27 07:28:53.979274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.043 [2024-11-27 07:28:53.979281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.043 [2024-11-27 07:28:53.979299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.043 qpair failed and we were unable to recover it. 00:33:43.043 [2024-11-27 07:28:53.989156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.043 [2024-11-27 07:28:53.989271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.043 [2024-11-27 07:28:53.989291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.043 [2024-11-27 07:28:53.989300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.043 [2024-11-27 07:28:53.989307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.043 [2024-11-27 07:28:53.989324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.043 qpair failed and we were unable to recover it. 00:33:43.043 [2024-11-27 07:28:53.999185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.043 [2024-11-27 07:28:53.999303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.043 [2024-11-27 07:28:53.999320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.043 [2024-11-27 07:28:53.999327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.043 [2024-11-27 07:28:53.999334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:53.999350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 
00:33:43.044 [2024-11-27 07:28:54.009239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.009321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.009340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.009350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.009359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.009376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 00:33:43.044 [2024-11-27 07:28:54.019203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.019270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.019288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.019296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.019302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.019319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 00:33:43.044 [2024-11-27 07:28:54.029265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.029360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.029378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.029386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.029392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.029409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 
00:33:43.044 [2024-11-27 07:28:54.039287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.039359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.039380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.039388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.039395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.039411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 00:33:43.044 [2024-11-27 07:28:54.049342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.049407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.049424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.049432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.049438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.049455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 00:33:43.044 [2024-11-27 07:28:54.059337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.059406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.059422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.059430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.059437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.059453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 
00:33:43.044 [2024-11-27 07:28:54.069369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.069434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.069450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.069458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.069465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.069482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 00:33:43.044 [2024-11-27 07:28:54.079407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.079482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.079500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.079507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.079514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.079536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 00:33:43.044 [2024-11-27 07:28:54.089476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.089544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.089561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.089569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.089575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.089592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 
00:33:43.044 [2024-11-27 07:28:54.099492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.044 [2024-11-27 07:28:54.099559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.044 [2024-11-27 07:28:54.099575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.044 [2024-11-27 07:28:54.099582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.044 [2024-11-27 07:28:54.099589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.044 [2024-11-27 07:28:54.099607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.044 qpair failed and we were unable to recover it. 00:33:43.044 [2024-11-27 07:28:54.109488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.109565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.109583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.109592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.109602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.109621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 00:33:43.045 [2024-11-27 07:28:54.119524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.119595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.119612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.119620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.119626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.119642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 
00:33:43.045 [2024-11-27 07:28:54.129568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.129647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.129664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.129671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.129678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.129694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 00:33:43.045 [2024-11-27 07:28:54.139584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.139653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.139669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.139677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.139684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.139700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 00:33:43.045 [2024-11-27 07:28:54.149606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.149668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.149686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.149693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.149700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.149717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 
00:33:43.045 [2024-11-27 07:28:54.159633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.159708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.159724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.159732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.159739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.159755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 00:33:43.045 [2024-11-27 07:28:54.169701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.169818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.169842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.169849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.169856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.169872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 00:33:43.045 [2024-11-27 07:28:54.179683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.179748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.179766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.179773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.179780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.179796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 
00:33:43.045 [2024-11-27 07:28:54.189731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.189796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.189812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.189820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.189827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.189843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 00:33:43.045 [2024-11-27 07:28:54.199776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.045 [2024-11-27 07:28:54.199842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.045 [2024-11-27 07:28:54.199859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.045 [2024-11-27 07:28:54.199867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.045 [2024-11-27 07:28:54.199874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.045 [2024-11-27 07:28:54.199890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.045 qpair failed and we were unable to recover it. 00:33:43.046 [2024-11-27 07:28:54.209808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.046 [2024-11-27 07:28:54.209888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.046 [2024-11-27 07:28:54.209922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.046 [2024-11-27 07:28:54.209932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.046 [2024-11-27 07:28:54.209947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.046 [2024-11-27 07:28:54.209971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.046 qpair failed and we were unable to recover it. 
00:33:43.046 [2024-11-27 07:28:54.219818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.046 [2024-11-27 07:28:54.219932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.046 [2024-11-27 07:28:54.219968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.046 [2024-11-27 07:28:54.219977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.046 [2024-11-27 07:28:54.219985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.046 [2024-11-27 07:28:54.220008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.046 qpair failed and we were unable to recover it. 00:33:43.046 [2024-11-27 07:28:54.229858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.046 [2024-11-27 07:28:54.229931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.046 [2024-11-27 07:28:54.229950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.046 [2024-11-27 07:28:54.229958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.046 [2024-11-27 07:28:54.229965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.046 [2024-11-27 07:28:54.229983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.046 qpair failed and we were unable to recover it. 00:33:43.046 [2024-11-27 07:28:54.239883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.046 [2024-11-27 07:28:54.239952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.046 [2024-11-27 07:28:54.239971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.046 [2024-11-27 07:28:54.239980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.046 [2024-11-27 07:28:54.239987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.046 [2024-11-27 07:28:54.240005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.046 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-11-27 07:28:54.249932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.250004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.250021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.250029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.250036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.250053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-11-27 07:28:54.259920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.259978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.259995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.260002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.260009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.260027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-11-27 07:28:54.269944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.270009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.270028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.270036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.270042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.270060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-11-27 07:28:54.280007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.280075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.280095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.280103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.280109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.280127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-11-27 07:28:54.290075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.290152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.290178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.290185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.290192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.290209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-11-27 07:28:54.300102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.300167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.300191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.300199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.300205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.300223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-11-27 07:28:54.310078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.310141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.310163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.310172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.310179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.310196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-11-27 07:28:54.320149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.320279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.320295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.320302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.320309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.320325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-11-27 07:28:54.330187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.330257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.330273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.330281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.330287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.330303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-11-27 07:28:54.340176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.309 [2024-11-27 07:28:54.340272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.309 [2024-11-27 07:28:54.340287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.309 [2024-11-27 07:28:54.340295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.309 [2024-11-27 07:28:54.340307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.309 [2024-11-27 07:28:54.340323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.350203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.350260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.350275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.350283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.350289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.350305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.360245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.360362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.360379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.360386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.360393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.360409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 [2024-11-27 07:28:54.370311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.370384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.370400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.370407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.370414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.370430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.380286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.380349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.380366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.380373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.380380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.380396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.390326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.390382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.390399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.390406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.390413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.390429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 [2024-11-27 07:28:54.400328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.400406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.400422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.400430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.400436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.400452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.410473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.410586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.410602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.410610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.410616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.410633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.420439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.420497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.420514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.420521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.420528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.420544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 [2024-11-27 07:28:54.430477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.430543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.430560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.430567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.430574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.430590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.440503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.310 [2024-11-27 07:28:54.440579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.310 [2024-11-27 07:28:54.440599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.310 [2024-11-27 07:28:54.440606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.310 [2024-11-27 07:28:54.440616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.310 [2024-11-27 07:28:54.440634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-11-27 07:28:54.450569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.311 [2024-11-27 07:28:54.450646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.311 [2024-11-27 07:28:54.450664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.311 [2024-11-27 07:28:54.450672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.311 [2024-11-27 07:28:54.450678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:43.311 [2024-11-27 07:28:54.450696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:43.311 qpair failed and we were unable to recover it. 
00:33:44.105 [2024-11-27 07:28:55.122394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.122459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.122474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.122481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.122488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.122504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 00:33:44.105 [2024-11-27 07:28:55.132511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.132565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.132580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.132588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.132597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.132612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 00:33:44.105 [2024-11-27 07:28:55.142517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.142618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.142633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.142640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.142646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.142661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 
00:33:44.105 [2024-11-27 07:28:55.152612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.152678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.152692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.152699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.152706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.152720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 00:33:44.105 [2024-11-27 07:28:55.162511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.162575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.162593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.162600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.162607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.162622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 00:33:44.105 [2024-11-27 07:28:55.172678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.172737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.172751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.172758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.172765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.172779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 
00:33:44.105 [2024-11-27 07:28:55.182616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.182662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.182676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.182683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.182689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.182704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 00:33:44.105 [2024-11-27 07:28:55.192669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.192720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.192735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.192742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.192748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.192763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 00:33:44.105 [2024-11-27 07:28:55.202716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.202772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.202787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.202794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.202805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.202820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 
00:33:44.105 [2024-11-27 07:28:55.212775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.105 [2024-11-27 07:28:55.212847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.105 [2024-11-27 07:28:55.212862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.105 [2024-11-27 07:28:55.212869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.105 [2024-11-27 07:28:55.212875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.105 [2024-11-27 07:28:55.212890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.105 qpair failed and we were unable to recover it. 00:33:44.106 [2024-11-27 07:28:55.222738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.222797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.222811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.222818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.222824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.222839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 00:33:44.106 [2024-11-27 07:28:55.232725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.232774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.232788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.232795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.232801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.232815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 
00:33:44.106 [2024-11-27 07:28:55.242828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.242886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.242899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.242906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.242912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.242926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 00:33:44.106 [2024-11-27 07:28:55.252877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.252933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.252947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.252954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.252961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.252975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 00:33:44.106 [2024-11-27 07:28:55.262865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.262917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.262942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.262951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.262958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.262978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 
00:33:44.106 [2024-11-27 07:28:55.272927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.272992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.273016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.273025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.273032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.273052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 00:33:44.106 [2024-11-27 07:28:55.282971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.283029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.283044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.283051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.283058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.283073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 00:33:44.106 [2024-11-27 07:28:55.292987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.293043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.293061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.293068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.293075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.293090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 
00:33:44.106 [2024-11-27 07:28:55.302928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.106 [2024-11-27 07:28:55.302978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.106 [2024-11-27 07:28:55.302991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.106 [2024-11-27 07:28:55.302998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.106 [2024-11-27 07:28:55.303004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.106 [2024-11-27 07:28:55.303018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.106 qpair failed and we were unable to recover it. 00:33:44.369 [2024-11-27 07:28:55.312891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.312944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.312957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.369 [2024-11-27 07:28:55.312964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.369 [2024-11-27 07:28:55.312971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.369 [2024-11-27 07:28:55.312985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.369 qpair failed and we were unable to recover it. 00:33:44.369 [2024-11-27 07:28:55.323064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.323120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.323133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.369 [2024-11-27 07:28:55.323140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.369 [2024-11-27 07:28:55.323146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.369 [2024-11-27 07:28:55.323164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.369 qpair failed and we were unable to recover it. 
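On the target side, "Unknown controller ID 0x1" means the CNTLID carried in the I/O-queue CONNECT no longer maps to a live controller within the subsystem (the controller these attempts reference has evidently been torn down), so every subsequent I/O-queue CONNECT is rejected as having invalid parameters. A rough, self-contained sketch of that lookup-and-reject step, with hypothetical types and names standing in for SPDK's internal bookkeeping:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for the target's subsystem/controller state. */
    struct ctrlr { uint16_t cntlid; };

    struct subsystem {
        struct ctrlr *ctrlrs;
        size_t num_ctrlrs;
    };

    /* Resolve the CNTLID from an I/O-queue CONNECT to a live controller;
       a NULL result models the "Unknown controller ID" path in ctrlr.c. */
    static struct ctrlr *subsystem_get_ctrlr(struct subsystem *ss, uint16_t cntlid)
    {
        for (size_t i = 0; i < ss->num_ctrlrs; i++) {
            if (ss->ctrlrs[i].cntlid == cntlid) {
                return &ss->ctrlrs[i];
            }
        }
        return NULL;
    }

    int main(void)
    {
        struct subsystem ss = { .ctrlrs = NULL, .num_ctrlrs = 0 };  /* controller already destroyed */
        uint16_t cntlid = 0x1;                                      /* CNTLID from the log */

        if (subsystem_get_ctrlr(&ss, cntlid) == NULL) {
            /* Target logs the error and completes CONNECT with sct 1, sc 0x82. */
            fprintf(stderr, "Unknown controller ID 0x%x -> CONNECT Invalid Parameters\n",
                    (unsigned)cntlid);
            return 1;
        }
        return 0;
    }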
00:33:44.369 [2024-11-27 07:28:55.333096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.333148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.333166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.369 [2024-11-27 07:28:55.333173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.369 [2024-11-27 07:28:55.333184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.369 [2024-11-27 07:28:55.333199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.369 qpair failed and we were unable to recover it. 00:33:44.369 [2024-11-27 07:28:55.343067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.343117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.343130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.369 [2024-11-27 07:28:55.343137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.369 [2024-11-27 07:28:55.343143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.369 [2024-11-27 07:28:55.343162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.369 qpair failed and we were unable to recover it. 00:33:44.369 [2024-11-27 07:28:55.353102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.353180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.353193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.369 [2024-11-27 07:28:55.353200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.369 [2024-11-27 07:28:55.353214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.369 [2024-11-27 07:28:55.353228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.369 qpair failed and we were unable to recover it. 
00:33:44.369 [2024-11-27 07:28:55.363175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.363230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.363243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.369 [2024-11-27 07:28:55.363251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.369 [2024-11-27 07:28:55.363258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.369 [2024-11-27 07:28:55.363273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.369 qpair failed and we were unable to recover it. 00:33:44.369 [2024-11-27 07:28:55.373208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.373290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.373303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.369 [2024-11-27 07:28:55.373311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.369 [2024-11-27 07:28:55.373318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.369 [2024-11-27 07:28:55.373333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.369 qpair failed and we were unable to recover it. 00:33:44.369 [2024-11-27 07:28:55.383201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.369 [2024-11-27 07:28:55.383247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.369 [2024-11-27 07:28:55.383261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.383268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.383275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.383289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 
00:33:44.370 [2024-11-27 07:28:55.393209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.393260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.393273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.393281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.393288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.393302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.403294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.403349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.403362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.403370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.403376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.403390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.413372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.413432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.413446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.413453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.413459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.413473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 
00:33:44.370 [2024-11-27 07:28:55.423193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.423244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.423260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.423267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.423274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.423288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.433336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.433388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.433401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.433408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.433414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.433428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.443415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.443496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.443509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.443516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.443523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.443536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 
00:33:44.370 [2024-11-27 07:28:55.453406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.453461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.453476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.453483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.453490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.453505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.463364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.463411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.463425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.463435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.463441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.463455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.473340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.473386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.473399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.473406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.473412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.473425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 
00:33:44.370 [2024-11-27 07:28:55.483540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.483613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.483626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.483633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.483639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.483653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.493495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.493546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.493559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.493567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.493574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.493589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 00:33:44.370 [2024-11-27 07:28:55.503530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.503575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.503588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.370 [2024-11-27 07:28:55.503595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.370 [2024-11-27 07:28:55.503602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.370 [2024-11-27 07:28:55.503615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.370 qpair failed and we were unable to recover it. 
00:33:44.370 [2024-11-27 07:28:55.513500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.370 [2024-11-27 07:28:55.513572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.370 [2024-11-27 07:28:55.513585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.371 [2024-11-27 07:28:55.513592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.371 [2024-11-27 07:28:55.513599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.371 [2024-11-27 07:28:55.513612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.371 qpair failed and we were unable to recover it. 00:33:44.371 [2024-11-27 07:28:55.523600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.371 [2024-11-27 07:28:55.523657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.371 [2024-11-27 07:28:55.523669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.371 [2024-11-27 07:28:55.523676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.371 [2024-11-27 07:28:55.523683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.371 [2024-11-27 07:28:55.523696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.371 qpair failed and we were unable to recover it. 00:33:44.371 [2024-11-27 07:28:55.533589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.371 [2024-11-27 07:28:55.533638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.371 [2024-11-27 07:28:55.533651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.371 [2024-11-27 07:28:55.533658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.371 [2024-11-27 07:28:55.533664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.371 [2024-11-27 07:28:55.533678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.371 qpair failed and we were unable to recover it. 
00:33:44.371 [2024-11-27 07:28:55.543606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.371 [2024-11-27 07:28:55.543657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.371 [2024-11-27 07:28:55.543670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.371 [2024-11-27 07:28:55.543677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.371 [2024-11-27 07:28:55.543683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.371 [2024-11-27 07:28:55.543697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.371 qpair failed and we were unable to recover it. 00:33:44.371 [2024-11-27 07:28:55.553640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.371 [2024-11-27 07:28:55.553689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.371 [2024-11-27 07:28:55.553702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.371 [2024-11-27 07:28:55.553709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.371 [2024-11-27 07:28:55.553716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.371 [2024-11-27 07:28:55.553729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.371 qpair failed and we were unable to recover it. 00:33:44.371 [2024-11-27 07:28:55.563709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.371 [2024-11-27 07:28:55.563791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.371 [2024-11-27 07:28:55.563804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.371 [2024-11-27 07:28:55.563811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.371 [2024-11-27 07:28:55.563817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.371 [2024-11-27 07:28:55.563831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.371 qpair failed and we were unable to recover it. 
00:33:44.633 [2024-11-27 07:28:55.573720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.633 [2024-11-27 07:28:55.573767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.633 [2024-11-27 07:28:55.573781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.633 [2024-11-27 07:28:55.573788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.633 [2024-11-27 07:28:55.573794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.633 [2024-11-27 07:28:55.573808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.633 qpair failed and we were unable to recover it. 00:33:44.633 [2024-11-27 07:28:55.583711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.633 [2024-11-27 07:28:55.583757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.633 [2024-11-27 07:28:55.583771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.633 [2024-11-27 07:28:55.583778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.633 [2024-11-27 07:28:55.583784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.633 [2024-11-27 07:28:55.583798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.633 qpair failed and we were unable to recover it. 00:33:44.633 [2024-11-27 07:28:55.593743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.633 [2024-11-27 07:28:55.593794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.633 [2024-11-27 07:28:55.593807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.633 [2024-11-27 07:28:55.593818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.633 [2024-11-27 07:28:55.593825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.633 [2024-11-27 07:28:55.593838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.633 qpair failed and we were unable to recover it. 
00:33:44.633 [2024-11-27 07:28:55.603819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.633 [2024-11-27 07:28:55.603913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.633 [2024-11-27 07:28:55.603926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.633 [2024-11-27 07:28:55.603933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.633 [2024-11-27 07:28:55.603940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.633 [2024-11-27 07:28:55.603953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.633 qpair failed and we were unable to recover it. 00:33:44.633 [2024-11-27 07:28:55.613812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.633 [2024-11-27 07:28:55.613865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.633 [2024-11-27 07:28:55.613890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.633 [2024-11-27 07:28:55.613899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.633 [2024-11-27 07:28:55.613905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.633 [2024-11-27 07:28:55.613925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.633 qpair failed and we were unable to recover it. 00:33:44.633 [2024-11-27 07:28:55.623823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.633 [2024-11-27 07:28:55.623880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.623905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.623914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.623921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.623940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 
00:33:44.634 [2024-11-27 07:28:55.633840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.633892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.633917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.633926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.633933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.633957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.643919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.643973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.643988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.643996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.644002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.644017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.653938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.653989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.654002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.654009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.654016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.654030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 
00:33:44.634 [2024-11-27 07:28:55.663938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.663986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.664000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.664007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.664013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.664028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.673859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.673921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.673935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.673942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.673948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.673962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.684010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.684062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.684076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.684083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.684090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.684104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 
00:33:44.634 [2024-11-27 07:28:55.694008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.694055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.694069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.694076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.694082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.694097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.704058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.704108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.704122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.704129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.704136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.704150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.714091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.714186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.714199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.714206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.714214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.714228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 
00:33:44.634 [2024-11-27 07:28:55.724148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.724209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.724226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.724233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.724240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.724254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.734148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.734204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.734218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.734225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.734231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.734246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 00:33:44.634 [2024-11-27 07:28:55.744180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.744230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.634 [2024-11-27 07:28:55.744243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.634 [2024-11-27 07:28:55.744251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.634 [2024-11-27 07:28:55.744257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.634 [2024-11-27 07:28:55.744272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.634 qpair failed and we were unable to recover it. 
00:33:44.634 [2024-11-27 07:28:55.754171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.634 [2024-11-27 07:28:55.754219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.754232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.754239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.754245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.754259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 00:33:44.635 [2024-11-27 07:28:55.764270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.764329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.764343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.764350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.764360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.764374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 00:33:44.635 [2024-11-27 07:28:55.774262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.774314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.774327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.774334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.774340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.774354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 
00:33:44.635 [2024-11-27 07:28:55.784274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.784321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.784335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.784342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.784348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.784362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 00:33:44.635 [2024-11-27 07:28:55.794268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.794319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.794332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.794339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.794346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.794360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 00:33:44.635 [2024-11-27 07:28:55.804371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.804423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.804436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.804443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.804449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.804463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 
00:33:44.635 [2024-11-27 07:28:55.814331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.814411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.814424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.814431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.814437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.814451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 00:33:44.635 [2024-11-27 07:28:55.824418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.824507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.824520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.824527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.824534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.824548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 00:33:44.635 [2024-11-27 07:28:55.834408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.635 [2024-11-27 07:28:55.834458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.635 [2024-11-27 07:28:55.834471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.635 [2024-11-27 07:28:55.834479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.635 [2024-11-27 07:28:55.834485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.635 [2024-11-27 07:28:55.834499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.635 qpair failed and we were unable to recover it. 
00:33:44.897 [2024-11-27 07:28:55.844485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.897 [2024-11-27 07:28:55.844565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.897 [2024-11-27 07:28:55.844578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.897 [2024-11-27 07:28:55.844585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.897 [2024-11-27 07:28:55.844591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.897 [2024-11-27 07:28:55.844605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.897 qpair failed and we were unable to recover it. 00:33:44.897 [2024-11-27 07:28:55.854477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.897 [2024-11-27 07:28:55.854530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.897 [2024-11-27 07:28:55.854547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.897 [2024-11-27 07:28:55.854554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.897 [2024-11-27 07:28:55.854560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.897 [2024-11-27 07:28:55.854574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.897 qpair failed and we were unable to recover it. 00:33:44.897 [2024-11-27 07:28:55.864456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.897 [2024-11-27 07:28:55.864504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.897 [2024-11-27 07:28:55.864517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.897 [2024-11-27 07:28:55.864525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.897 [2024-11-27 07:28:55.864531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.897 [2024-11-27 07:28:55.864545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.897 qpair failed and we were unable to recover it. 
00:33:44.897 [2024-11-27 07:28:55.874514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.874560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.874573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.874580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.874586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.874600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.884573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.884625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.884639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.884645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.884652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.884665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.894575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.894631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.894644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.894652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.894661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.894676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 
00:33:44.898 [2024-11-27 07:28:55.904487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.904534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.904547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.904555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.904561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.904575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.914618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.914672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.914686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.914693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.914699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.914713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.924703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.924758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.924771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.924778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.924784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.924798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 
00:33:44.898 [2024-11-27 07:28:55.934676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.934724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.934737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.934744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.934750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.934764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.944687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.944746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.944759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.944766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.944773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.944787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.954822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.954884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.954897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.954904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.954911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.954925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 
00:33:44.898 [2024-11-27 07:28:55.964822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.964928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.964941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.964948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.964956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.964970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.974786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.974834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.974847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.974854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.974861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.974875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:55.984841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.984887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.984904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.984912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.984918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.984932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 
00:33:44.898 [2024-11-27 07:28:55.994826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.898 [2024-11-27 07:28:55.994896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.898 [2024-11-27 07:28:55.994909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.898 [2024-11-27 07:28:55.994917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.898 [2024-11-27 07:28:55.994924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.898 [2024-11-27 07:28:55.994938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.898 qpair failed and we were unable to recover it. 00:33:44.898 [2024-11-27 07:28:56.004899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.004952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.004965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.004972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.004979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.004992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 00:33:44.899 [2024-11-27 07:28:56.014891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.014944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.014957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.014964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.014971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.014984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 
00:33:44.899 [2024-11-27 07:28:56.024908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.024955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.024968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.024978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.024985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.024999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 00:33:44.899 [2024-11-27 07:28:56.034889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.034959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.034972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.034978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.034985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.034999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 00:33:44.899 [2024-11-27 07:28:56.045011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.045062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.045075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.045082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.045088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.045102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 
00:33:44.899 [2024-11-27 07:28:56.055000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.055053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.055066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.055073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.055080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.055094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 00:33:44.899 [2024-11-27 07:28:56.065014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.065064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.065077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.065084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.065090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.065104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 00:33:44.899 [2024-11-27 07:28:56.074994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.075040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.075054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.075061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.075067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.075081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 
00:33:44.899 [2024-11-27 07:28:56.085029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.085082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.085096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.085103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.085109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.085123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 00:33:44.899 [2024-11-27 07:28:56.095091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.899 [2024-11-27 07:28:56.095162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.899 [2024-11-27 07:28:56.095177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.899 [2024-11-27 07:28:56.095184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.899 [2024-11-27 07:28:56.095191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:44.899 [2024-11-27 07:28:56.095206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:44.899 qpair failed and we were unable to recover it. 00:33:45.162 [2024-11-27 07:28:56.105002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.105047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.105061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.105068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.105074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.105088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 
00:33:45.162 [2024-11-27 07:28:56.115166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.115218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.115232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.115239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.115245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.115259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 00:33:45.162 [2024-11-27 07:28:56.125222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.125281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.125294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.125301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.125308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.125322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 00:33:45.162 [2024-11-27 07:28:56.135234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.135285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.135298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.135305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.135311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.135326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 
00:33:45.162 [2024-11-27 07:28:56.145254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.145301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.145313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.145320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.145327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.145341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 00:33:45.162 [2024-11-27 07:28:56.155254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.155303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.155316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.155327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.155337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.155352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 00:33:45.162 [2024-11-27 07:28:56.165313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.165366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.165379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.165386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.165393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.165409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 
00:33:45.162 [2024-11-27 07:28:56.175345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.175396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.175409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.175417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.175423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.175438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 00:33:45.162 [2024-11-27 07:28:56.185380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.185437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.185451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.185459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.185465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.185480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 00:33:45.162 [2024-11-27 07:28:56.195402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.162 [2024-11-27 07:28:56.195449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.162 [2024-11-27 07:28:56.195462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.162 [2024-11-27 07:28:56.195469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.162 [2024-11-27 07:28:56.195476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.162 [2024-11-27 07:28:56.195493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.162 qpair failed and we were unable to recover it. 
00:33:45.162 [2024-11-27 07:28:56.205429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.162 [2024-11-27 07:28:56.205484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.162 [2024-11-27 07:28:56.205497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.162 [2024-11-27 07:28:56.205504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.162 [2024-11-27 07:28:56.205510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.162 [2024-11-27 07:28:56.205524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.162 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.215440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.215540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.215553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.215560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.215566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.215580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.225460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.225507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.225520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.225527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.225533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.225547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.235464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.235511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.235524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.235531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.235537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.235551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.245554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.245608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.245622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.245630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.245636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.245651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.255563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.255609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.255621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.255628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.255635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.255648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.265551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.265601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.265614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.265621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.265627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.265641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.275572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.275620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.275633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.275640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.275647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.275660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.285664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.285716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.285733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.285740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.285746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.285760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.295636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.295681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.295694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.295701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.295707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.295721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.163 [2024-11-27 07:28:56.305682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.163 [2024-11-27 07:28:56.305726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.163 [2024-11-27 07:28:56.305741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.163 [2024-11-27 07:28:56.305748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.163 [2024-11-27 07:28:56.305754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.163 [2024-11-27 07:28:56.305768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.163 qpair failed and we were unable to recover it.
00:33:45.164 [2024-11-27 07:28:56.315707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.164 [2024-11-27 07:28:56.315763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.164 [2024-11-27 07:28:56.315776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.164 [2024-11-27 07:28:56.315783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.164 [2024-11-27 07:28:56.315789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.164 [2024-11-27 07:28:56.315804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.164 qpair failed and we were unable to recover it.
00:33:45.164 [2024-11-27 07:28:56.325737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.164 [2024-11-27 07:28:56.325799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.164 [2024-11-27 07:28:56.325813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.164 [2024-11-27 07:28:56.325820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.164 [2024-11-27 07:28:56.325830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.164 [2024-11-27 07:28:56.325844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.164 qpair failed and we were unable to recover it.
00:33:45.164 [2024-11-27 07:28:56.335772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.164 [2024-11-27 07:28:56.335869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.164 [2024-11-27 07:28:56.335882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.164 [2024-11-27 07:28:56.335889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.164 [2024-11-27 07:28:56.335895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.164 [2024-11-27 07:28:56.335909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.164 qpair failed and we were unable to recover it.
00:33:45.164 [2024-11-27 07:28:56.345784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.164 [2024-11-27 07:28:56.345864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.164 [2024-11-27 07:28:56.345877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.164 [2024-11-27 07:28:56.345884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.164 [2024-11-27 07:28:56.345890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.164 [2024-11-27 07:28:56.345904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.164 qpair failed and we were unable to recover it.
00:33:45.164 [2024-11-27 07:28:56.355805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.164 [2024-11-27 07:28:56.355851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.164 [2024-11-27 07:28:56.355864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.164 [2024-11-27 07:28:56.355871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.164 [2024-11-27 07:28:56.355877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.164 [2024-11-27 07:28:56.355891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.164 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.365849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.365908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.365921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.365928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.426 [2024-11-27 07:28:56.365934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.426 [2024-11-27 07:28:56.365949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.426 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.375887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.375962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.375975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.375982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.426 [2024-11-27 07:28:56.375989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.426 [2024-11-27 07:28:56.376003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.426 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.385845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.385938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.385951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.385958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.426 [2024-11-27 07:28:56.385965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.426 [2024-11-27 07:28:56.385979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.426 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.395931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.395977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.395991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.395998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.426 [2024-11-27 07:28:56.396004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.426 [2024-11-27 07:28:56.396018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.426 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.405994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.406046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.406059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.406066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.426 [2024-11-27 07:28:56.406073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.426 [2024-11-27 07:28:56.406086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.426 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.415875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.415990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.416006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.416013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.426 [2024-11-27 07:28:56.416020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.426 [2024-11-27 07:28:56.416034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.426 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.425988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.426034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.426047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.426054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.426 [2024-11-27 07:28:56.426060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.426 [2024-11-27 07:28:56.426074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.426 qpair failed and we were unable to recover it.
00:33:45.426 [2024-11-27 07:28:56.436031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.426 [2024-11-27 07:28:56.436078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.426 [2024-11-27 07:28:56.436091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.426 [2024-11-27 07:28:56.436098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.436104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.436118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.446128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.446213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.446226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.446233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.446240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.446254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.456100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.456154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.456170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.456177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.456187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.456201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.466102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.466147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.466163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.466170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.466176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.466191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.476127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.476171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.476184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.476191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.476197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.476211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.486210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.486263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.486277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.486284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.486290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.486304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.496216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.496270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.496284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.496292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.496299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.496313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.506217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.506268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.506281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.506288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.506294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.506308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.516248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.516295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.516308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.516315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.516321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.516335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.526291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.526355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.526368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.526375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.526381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.526395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.536300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.536350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.536363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.427 [2024-11-27 07:28:56.536370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.427 [2024-11-27 07:28:56.536376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.427 [2024-11-27 07:28:56.536391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.427 qpair failed and we were unable to recover it.
00:33:45.427 [2024-11-27 07:28:56.546333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.427 [2024-11-27 07:28:56.546382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.427 [2024-11-27 07:28:56.546399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.546406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.546412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.546426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.556237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.556285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.556298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.556305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.556311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.556325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.566413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.566467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.566480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.566487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.566493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.566507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.576434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.576482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.576495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.576502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.576508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.576522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.586451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.586510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.586524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.586535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.586541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.586555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.596467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.596514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.596527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.596534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.596540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.596554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.606507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.606560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.606573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.606580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.606587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.606600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.616533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.616586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.616599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.616606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.616613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.616626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.428 [2024-11-27 07:28:56.626510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.428 [2024-11-27 07:28:56.626556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.428 [2024-11-27 07:28:56.626569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.428 [2024-11-27 07:28:56.626576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.428 [2024-11-27 07:28:56.626582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.428 [2024-11-27 07:28:56.626599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.428 qpair failed and we were unable to recover it.
00:33:45.689 [2024-11-27 07:28:56.636584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.689 [2024-11-27 07:28:56.636632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.689 [2024-11-27 07:28:56.636645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.689 [2024-11-27 07:28:56.636652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.689 [2024-11-27 07:28:56.636658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.689 [2024-11-27 07:28:56.636672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.689 qpair failed and we were unable to recover it.
00:33:45.689 [2024-11-27 07:28:56.646644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.689 [2024-11-27 07:28:56.646700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.689 [2024-11-27 07:28:56.646713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.689 [2024-11-27 07:28:56.646720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.689 [2024-11-27 07:28:56.646727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.689 [2024-11-27 07:28:56.646740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.689 qpair failed and we were unable to recover it.
00:33:45.689 [2024-11-27 07:28:56.656625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.689 [2024-11-27 07:28:56.656675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.689 [2024-11-27 07:28:56.656688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.689 [2024-11-27 07:28:56.656695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.689 [2024-11-27 07:28:56.656702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.689 [2024-11-27 07:28:56.656715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.689 qpair failed and we were unable to recover it.
00:33:45.689 [2024-11-27 07:28:56.666646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.689 [2024-11-27 07:28:56.666694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.689 [2024-11-27 07:28:56.666707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.689 [2024-11-27 07:28:56.666714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.666721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.666735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.676683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.676734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.676747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.676754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.676761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.676775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.686760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.686813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.686826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.686833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.686840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.686854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.696736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.696805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.696819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.696825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.696832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.696846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.706779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.706823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.706836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.706843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.706849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.706863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.716813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.716861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.716873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.716884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.716891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.716905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.726737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.726794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.726807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.726814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.726821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.726834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.736883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.736940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.736964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.736973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.736981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.737000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.746882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.746934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.746959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.746969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.746976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.746996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.756910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.756962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.756987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.756995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.757002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.757026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.766994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.690 [2024-11-27 07:28:56.767053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.690 [2024-11-27 07:28:56.767068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.690 [2024-11-27 07:28:56.767075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.690 [2024-11-27 07:28:56.767082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:45.690 [2024-11-27 07:28:56.767097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:45.690 qpair failed and we were unable to recover it.
00:33:45.690 [2024-11-27 07:28:56.776972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.690 [2024-11-27 07:28:56.777022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.690 [2024-11-27 07:28:56.777036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.690 [2024-11-27 07:28:56.777043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.690 [2024-11-27 07:28:56.777049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.690 [2024-11-27 07:28:56.777064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.690 qpair failed and we were unable to recover it. 00:33:45.690 [2024-11-27 07:28:56.787044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.690 [2024-11-27 07:28:56.787101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.690 [2024-11-27 07:28:56.787115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.690 [2024-11-27 07:28:56.787122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.690 [2024-11-27 07:28:56.787128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.690 [2024-11-27 07:28:56.787143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.690 qpair failed and we were unable to recover it. 00:33:45.690 [2024-11-27 07:28:56.796988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.690 [2024-11-27 07:28:56.797037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.690 [2024-11-27 07:28:56.797050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.690 [2024-11-27 07:28:56.797057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.797064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.797078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 
00:33:45.691 [2024-11-27 07:28:56.807097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.807149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.807167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.807174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.807180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.807195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 00:33:45.691 [2024-11-27 07:28:56.817078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.817130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.817143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.817150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.817156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.817174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 00:33:45.691 [2024-11-27 07:28:56.827099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.827149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.827166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.827174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.827180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.827194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 
00:33:45.691 [2024-11-27 07:28:56.837128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.837178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.837192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.837199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.837205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.837219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 00:33:45.691 [2024-11-27 07:28:56.847201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.847285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.847303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.847310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.847317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.847332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 00:33:45.691 [2024-11-27 07:28:56.857169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.857222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.857235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.857242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.857248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.857263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 
00:33:45.691 [2024-11-27 07:28:56.867214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.867263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.867277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.867284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.867290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.867304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 00:33:45.691 [2024-11-27 07:28:56.877198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.877245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.877258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.877265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.877271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.877286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 00:33:45.691 [2024-11-27 07:28:56.887319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.691 [2024-11-27 07:28:56.887393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.691 [2024-11-27 07:28:56.887406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.691 [2024-11-27 07:28:56.887412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.691 [2024-11-27 07:28:56.887423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.691 [2024-11-27 07:28:56.887437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.691 qpair failed and we were unable to recover it. 
00:33:45.953 [2024-11-27 07:28:56.897313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.897360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.897373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.897381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.897387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.897401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 00:33:45.953 [2024-11-27 07:28:56.907209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.907268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.907281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.907288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.907294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.907308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 00:33:45.953 [2024-11-27 07:28:56.917364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.917413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.917426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.917433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.917439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.917453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 
00:33:45.953 [2024-11-27 07:28:56.927407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.927464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.927477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.927484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.927491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.927504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 00:33:45.953 [2024-11-27 07:28:56.937448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.937542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.937555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.937562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.937568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.937582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 00:33:45.953 [2024-11-27 07:28:56.947423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.947473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.947486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.947493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.947499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.947513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 
00:33:45.953 [2024-11-27 07:28:56.957454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.957497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.957510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.957517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.957524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.957537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 00:33:45.953 [2024-11-27 07:28:56.967522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.967575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.967588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.967595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.967602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.953 [2024-11-27 07:28:56.967615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.953 qpair failed and we were unable to recover it. 00:33:45.953 [2024-11-27 07:28:56.977519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.953 [2024-11-27 07:28:56.977567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.953 [2024-11-27 07:28:56.977583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.953 [2024-11-27 07:28:56.977590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.953 [2024-11-27 07:28:56.977597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:56.977611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 
00:33:45.954 [2024-11-27 07:28:56.987543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:56.987596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:56.987610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:56.987617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:56.987624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:56.987638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:56.997561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:56.997608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:56.997621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:56.997628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:56.997634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:56.997648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:57.007640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.007695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.007708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.007715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.007722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.007736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 
00:33:45.954 [2024-11-27 07:28:57.017631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.017684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.017698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.017705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.017714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.017729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:57.027646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.027695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.027708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.027715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.027721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.027735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:57.037700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.037780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.037793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.037800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.037806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.037821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 
00:33:45.954 [2024-11-27 07:28:57.047731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.047786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.047799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.047806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.047813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.047827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:57.057737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.057788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.057802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.057809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.057816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.057830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:57.067758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.067821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.067834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.067842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.067848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.067862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 
00:33:45.954 [2024-11-27 07:28:57.077790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.077835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.077848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.077855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.077862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.077876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:57.087854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.087909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.087932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.087939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.087946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.087964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.954 [2024-11-27 07:28:57.097853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.097909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.097934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.097943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.097950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.097969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 
00:33:45.954 [2024-11-27 07:28:57.107876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.954 [2024-11-27 07:28:57.107975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.954 [2024-11-27 07:28:57.108006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.954 [2024-11-27 07:28:57.108017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.954 [2024-11-27 07:28:57.108025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.954 [2024-11-27 07:28:57.108046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.954 qpair failed and we were unable to recover it. 00:33:45.955 [2024-11-27 07:28:57.117860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.955 [2024-11-27 07:28:57.117912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.955 [2024-11-27 07:28:57.117926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.955 [2024-11-27 07:28:57.117934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.955 [2024-11-27 07:28:57.117940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.955 [2024-11-27 07:28:57.117955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.955 qpair failed and we were unable to recover it. 00:33:45.955 [2024-11-27 07:28:57.127817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.955 [2024-11-27 07:28:57.127871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.955 [2024-11-27 07:28:57.127885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.955 [2024-11-27 07:28:57.127893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.955 [2024-11-27 07:28:57.127899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.955 [2024-11-27 07:28:57.127913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.955 qpair failed and we were unable to recover it. 
00:33:45.955 [2024-11-27 07:28:57.137941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.955 [2024-11-27 07:28:57.137990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.955 [2024-11-27 07:28:57.138003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.955 [2024-11-27 07:28:57.138011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.955 [2024-11-27 07:28:57.138017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.955 [2024-11-27 07:28:57.138032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.955 qpair failed and we were unable to recover it. 00:33:45.955 [2024-11-27 07:28:57.147956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.955 [2024-11-27 07:28:57.148008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.955 [2024-11-27 07:28:57.148022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.955 [2024-11-27 07:28:57.148033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.955 [2024-11-27 07:28:57.148041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:45.955 [2024-11-27 07:28:57.148056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:45.955 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.157985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.158031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.158044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.158051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.158057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.158071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 
00:33:46.218 [2024-11-27 07:28:57.168015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.168069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.168082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.168089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.168095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.168109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.178060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.178110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.178123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.178130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.178137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.178151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.188066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.188110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.188123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.188130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.188137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.188155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 
00:33:46.218 [2024-11-27 07:28:57.198099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.198147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.198164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.198172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.198178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.198193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.208188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.208244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.208257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.208264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.208270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.208284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.218142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.218196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.218209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.218216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.218222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.218236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 
00:33:46.218 [2024-11-27 07:28:57.228192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.228242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.228255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.228262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.228269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.228283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.238237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.238316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.238330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.238337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.238343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.238357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.248142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.248204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.248218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.248225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.248231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.248245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 
00:33:46.218 [2024-11-27 07:28:57.258281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.258362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.258375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.258382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.258388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.258403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.218 qpair failed and we were unable to recover it. 00:33:46.218 [2024-11-27 07:28:57.268276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.218 [2024-11-27 07:28:57.268328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.218 [2024-11-27 07:28:57.268341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.218 [2024-11-27 07:28:57.268348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.218 [2024-11-27 07:28:57.268354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.218 [2024-11-27 07:28:57.268368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 00:33:46.219 [2024-11-27 07:28:57.278298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.219 [2024-11-27 07:28:57.278349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.219 [2024-11-27 07:28:57.278363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.219 [2024-11-27 07:28:57.278373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.219 [2024-11-27 07:28:57.278380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.219 [2024-11-27 07:28:57.278394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 
00:33:46.219 [2024-11-27 07:28:57.288352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.219 [2024-11-27 07:28:57.288406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.219 [2024-11-27 07:28:57.288419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.219 [2024-11-27 07:28:57.288426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.219 [2024-11-27 07:28:57.288432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.219 [2024-11-27 07:28:57.288446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 00:33:46.219 [2024-11-27 07:28:57.298368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.219 [2024-11-27 07:28:57.298420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.219 [2024-11-27 07:28:57.298433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.219 [2024-11-27 07:28:57.298441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.219 [2024-11-27 07:28:57.298447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.219 [2024-11-27 07:28:57.298460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 00:33:46.219 [2024-11-27 07:28:57.308395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.219 [2024-11-27 07:28:57.308440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.219 [2024-11-27 07:28:57.308454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.219 [2024-11-27 07:28:57.308461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.219 [2024-11-27 07:28:57.308467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.219 [2024-11-27 07:28:57.308481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 
00:33:46.219 [2024-11-27 07:28:57.318411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.219 [2024-11-27 07:28:57.318457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.219 [2024-11-27 07:28:57.318471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.219 [2024-11-27 07:28:57.318478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.219 [2024-11-27 07:28:57.318484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.219 [2024-11-27 07:28:57.318501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 00:33:46.219 [2024-11-27 07:28:57.328492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.219 [2024-11-27 07:28:57.328547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.219 [2024-11-27 07:28:57.328559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.219 [2024-11-27 07:28:57.328566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.219 [2024-11-27 07:28:57.328573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.219 [2024-11-27 07:28:57.328587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 00:33:46.219 [2024-11-27 07:28:57.338378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.219 [2024-11-27 07:28:57.338434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.219 [2024-11-27 07:28:57.338447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.219 [2024-11-27 07:28:57.338454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.219 [2024-11-27 07:28:57.338461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.219 [2024-11-27 07:28:57.338475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.219 qpair failed and we were unable to recover it. 
00:33:46.219 [2024-11-27 07:28:57.348512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.219 [2024-11-27 07:28:57.348563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.219 [2024-11-27 07:28:57.348576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.219 [2024-11-27 07:28:57.348583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.219 [2024-11-27 07:28:57.348590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.219 [2024-11-27 07:28:57.348604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.219 qpair failed and we were unable to recover it.
00:33:46.219 [2024-11-27 07:28:57.358477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.219 [2024-11-27 07:28:57.358527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.219 [2024-11-27 07:28:57.358540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.219 [2024-11-27 07:28:57.358547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.219 [2024-11-27 07:28:57.358554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.219 [2024-11-27 07:28:57.358568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.219 qpair failed and we were unable to recover it.
00:33:46.219 [2024-11-27 07:28:57.368582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.219 [2024-11-27 07:28:57.368636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.219 [2024-11-27 07:28:57.368649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.219 [2024-11-27 07:28:57.368656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.219 [2024-11-27 07:28:57.368662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.219 [2024-11-27 07:28:57.368676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.219 qpair failed and we were unable to recover it.
00:33:46.219 [2024-11-27 07:28:57.378613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.219 [2024-11-27 07:28:57.378668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.219 [2024-11-27 07:28:57.378682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.219 [2024-11-27 07:28:57.378689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.219 [2024-11-27 07:28:57.378695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.219 [2024-11-27 07:28:57.378709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.219 qpair failed and we were unable to recover it.
00:33:46.219 [2024-11-27 07:28:57.388574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.219 [2024-11-27 07:28:57.388624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.219 [2024-11-27 07:28:57.388639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.219 [2024-11-27 07:28:57.388646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.219 [2024-11-27 07:28:57.388652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.219 [2024-11-27 07:28:57.388667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.219 qpair failed and we were unable to recover it.
00:33:46.219 [2024-11-27 07:28:57.398616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.219 [2024-11-27 07:28:57.398666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.219 [2024-11-27 07:28:57.398679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.219 [2024-11-27 07:28:57.398687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.219 [2024-11-27 07:28:57.398693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.219 [2024-11-27 07:28:57.398707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.219 qpair failed and we were unable to recover it.
00:33:46.220 [2024-11-27 07:28:57.408689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.220 [2024-11-27 07:28:57.408744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.220 [2024-11-27 07:28:57.408762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.220 [2024-11-27 07:28:57.408769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.220 [2024-11-27 07:28:57.408775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.220 [2024-11-27 07:28:57.408794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.220 qpair failed and we were unable to recover it.
00:33:46.220 [2024-11-27 07:28:57.418693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.220 [2024-11-27 07:28:57.418744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.220 [2024-11-27 07:28:57.418758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.220 [2024-11-27 07:28:57.418765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.220 [2024-11-27 07:28:57.418771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.220 [2024-11-27 07:28:57.418785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.220 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.428700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.428744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.428758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.428765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.428771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.428785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.438690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.438754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.438768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.438775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.438781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.438795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.448812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.448865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.448878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.448885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.448895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.448909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.458772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.458825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.458849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.458858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.458865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.458884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.468811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.468864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.468888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.468897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.468904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.468923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.478800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.478856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.478880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.478889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.478896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.478916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.488871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.488932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.488957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.488966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.488973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.488992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.498779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.498839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.498854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.498861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.498868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.498885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.508914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.508963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.508976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.508983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.508990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.509004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.482 [2024-11-27 07:28:57.518949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.482 [2024-11-27 07:28:57.519003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.482 [2024-11-27 07:28:57.519028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.482 [2024-11-27 07:28:57.519036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.482 [2024-11-27 07:28:57.519044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.482 [2024-11-27 07:28:57.519063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.482 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.529006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.529063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.529077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.529085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.529091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.529106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.539014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.539069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.539087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.539095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.539101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.539116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.549000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.549050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.549064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.549071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.549077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.549092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.559044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.559091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.559104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.559111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.559118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.559133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.569123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.569177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.569191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.569198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.569204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.569218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.579122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.579171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.579185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.579192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.579203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.579217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.589137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.589228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.589241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.589248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.589255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.589269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.599198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.599250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.599263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.599271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.599277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.599291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.609209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.609288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.609301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.609308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.609315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.609329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.619232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.619281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.619294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.619301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.619308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.619322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.629232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.629284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.629297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.629304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.629310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.629325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.639279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.483 [2024-11-27 07:28:57.639335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.483 [2024-11-27 07:28:57.639348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.483 [2024-11-27 07:28:57.639355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.483 [2024-11-27 07:28:57.639361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.483 [2024-11-27 07:28:57.639375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.483 qpair failed and we were unable to recover it.
00:33:46.483 [2024-11-27 07:28:57.649348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.484 [2024-11-27 07:28:57.649411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.484 [2024-11-27 07:28:57.649425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.484 [2024-11-27 07:28:57.649432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.484 [2024-11-27 07:28:57.649439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.484 [2024-11-27 07:28:57.649453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.484 qpair failed and we were unable to recover it.
00:33:46.484 [2024-11-27 07:28:57.659317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.484 [2024-11-27 07:28:57.659368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.484 [2024-11-27 07:28:57.659384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.484 [2024-11-27 07:28:57.659391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.484 [2024-11-27 07:28:57.659397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.484 [2024-11-27 07:28:57.659412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.484 qpair failed and we were unable to recover it.
00:33:46.484 [2024-11-27 07:28:57.669323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.484 [2024-11-27 07:28:57.669376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.484 [2024-11-27 07:28:57.669393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.484 [2024-11-27 07:28:57.669401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.484 [2024-11-27 07:28:57.669407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.484 [2024-11-27 07:28:57.669421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.484 qpair failed and we were unable to recover it.
00:33:46.484 [2024-11-27 07:28:57.679357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.484 [2024-11-27 07:28:57.679403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.484 [2024-11-27 07:28:57.679417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.484 [2024-11-27 07:28:57.679424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.484 [2024-11-27 07:28:57.679431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.484 [2024-11-27 07:28:57.679445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.484 qpair failed and we were unable to recover it.
00:33:46.746 [2024-11-27 07:28:57.689458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.746 [2024-11-27 07:28:57.689511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.746 [2024-11-27 07:28:57.689524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.746 [2024-11-27 07:28:57.689531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.746 [2024-11-27 07:28:57.689538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.746 [2024-11-27 07:28:57.689552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.746 qpair failed and we were unable to recover it.
00:33:46.746 [2024-11-27 07:28:57.699441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.746 [2024-11-27 07:28:57.699491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.746 [2024-11-27 07:28:57.699505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.746 [2024-11-27 07:28:57.699512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.746 [2024-11-27 07:28:57.699518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.746 [2024-11-27 07:28:57.699532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.746 qpair failed and we were unable to recover it.
00:33:46.746 [2024-11-27 07:28:57.709443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.746 [2024-11-27 07:28:57.709492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.746 [2024-11-27 07:28:57.709506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.746 [2024-11-27 07:28:57.709516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.746 [2024-11-27 07:28:57.709523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.709537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.719494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.719548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.719561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.719568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.719575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.719589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.729558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.729611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.729624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.729631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.729637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.729651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.739553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.739606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.739619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.739627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.739633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.739647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.749558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.749605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.749618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.749626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.749633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.749650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.759602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.759650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.759663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.759670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.759676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.759690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.769647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.769702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.769715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.769722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.769729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.769743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.779643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.779693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.779707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.779714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.779720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.779734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.789685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.789730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.789744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.789750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.789757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.789771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.799681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.799735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.799750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.799757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.799763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.799778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.809770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.809825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.809838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.809845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.809852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.809867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.819667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.819726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.819740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.819747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.819754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.819767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.829809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.829860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.829873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.829880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.747 [2024-11-27 07:28:57.829886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.747 [2024-11-27 07:28:57.829900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.747 qpair failed and we were unable to recover it.
00:33:46.747 [2024-11-27 07:28:57.839818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.747 [2024-11-27 07:28:57.839863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.747 [2024-11-27 07:28:57.839876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.747 [2024-11-27 07:28:57.839886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.839892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.839906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.849762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.748 [2024-11-27 07:28:57.849817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.748 [2024-11-27 07:28:57.849830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.748 [2024-11-27 07:28:57.849837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.849844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.849858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.859848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.748 [2024-11-27 07:28:57.859899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.748 [2024-11-27 07:28:57.859912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.748 [2024-11-27 07:28:57.859919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.859925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.859939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.869909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.748 [2024-11-27 07:28:57.869955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.748 [2024-11-27 07:28:57.869968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.748 [2024-11-27 07:28:57.869976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.869982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.869996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.879926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.748 [2024-11-27 07:28:57.879972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.748 [2024-11-27 07:28:57.879985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.748 [2024-11-27 07:28:57.879992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.879999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.880016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.889979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.748 [2024-11-27 07:28:57.890035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.748 [2024-11-27 07:28:57.890048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.748 [2024-11-27 07:28:57.890055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.890061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.890075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.899993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.748 [2024-11-27 07:28:57.900041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.748 [2024-11-27 07:28:57.900055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.748 [2024-11-27 07:28:57.900062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.900068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.900082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.910020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.748 [2024-11-27 07:28:57.910094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.748 [2024-11-27 07:28:57.910107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.748 [2024-11-27 07:28:57.910114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.748 [2024-11-27 07:28:57.910120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90
00:33:46.748 [2024-11-27 07:28:57.910134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:46.748 qpair failed and we were unable to recover it.
00:33:46.748 [2024-11-27 07:28:57.920036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.748 [2024-11-27 07:28:57.920078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.748 [2024-11-27 07:28:57.920091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.748 [2024-11-27 07:28:57.920098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.748 [2024-11-27 07:28:57.920104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.748 [2024-11-27 07:28:57.920118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.748 qpair failed and we were unable to recover it. 00:33:46.748 [2024-11-27 07:28:57.930012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.748 [2024-11-27 07:28:57.930108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.748 [2024-11-27 07:28:57.930121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.748 [2024-11-27 07:28:57.930128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.748 [2024-11-27 07:28:57.930134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.748 [2024-11-27 07:28:57.930148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.748 qpair failed and we were unable to recover it. 00:33:46.748 [2024-11-27 07:28:57.940107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.748 [2024-11-27 07:28:57.940165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.748 [2024-11-27 07:28:57.940178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.748 [2024-11-27 07:28:57.940185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.748 [2024-11-27 07:28:57.940192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:46.748 [2024-11-27 07:28:57.940206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.748 qpair failed and we were unable to recover it. 
00:33:47.009 [2024-11-27 07:28:57.950078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.009 [2024-11-27 07:28:57.950126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.009 [2024-11-27 07:28:57.950139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.009 [2024-11-27 07:28:57.950146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.009 [2024-11-27 07:28:57.950153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.009 [2024-11-27 07:28:57.950170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.009 qpair failed and we were unable to recover it. 00:33:47.009 [2024-11-27 07:28:57.960123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.009 [2024-11-27 07:28:57.960172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.009 [2024-11-27 07:28:57.960186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.009 [2024-11-27 07:28:57.960193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.009 [2024-11-27 07:28:57.960199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.009 [2024-11-27 07:28:57.960214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.009 qpair failed and we were unable to recover it. 00:33:47.009 [2024-11-27 07:28:57.970216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.009 [2024-11-27 07:28:57.970272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.009 [2024-11-27 07:28:57.970288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.009 [2024-11-27 07:28:57.970295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.009 [2024-11-27 07:28:57.970301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.009 [2024-11-27 07:28:57.970315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.009 qpair failed and we were unable to recover it. 
00:33:47.010 [2024-11-27 07:28:57.980214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:57.980263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:57.980277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:57.980284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:57.980290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:57.980304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:57.990227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:57.990275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:57.990288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:57.990295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:57.990302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:57.990316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:58.000257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.000304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.000318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.000326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.000333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.000349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 
00:33:47.010 [2024-11-27 07:28:58.010314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.010369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.010382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.010389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.010399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.010413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:58.020329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.020429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.020442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.020449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.020456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.020470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:58.030338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.030388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.030401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.030408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.030415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.030428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 
00:33:47.010 [2024-11-27 07:28:58.040361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.040410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.040424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.040431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.040437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.040450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:58.050447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.050499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.050512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.050519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.050525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.050539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:58.060454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.060502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.060515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.060522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.060528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.060542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 
00:33:47.010 [2024-11-27 07:28:58.070422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.070472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.070486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.070493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.070500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.070514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:58.080488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.080588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.080601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.080608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.080615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.080629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 00:33:47.010 [2024-11-27 07:28:58.090550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.090603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.090616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.090623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.090630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.090644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.010 qpair failed and we were unable to recover it. 
00:33:47.010 [2024-11-27 07:28:58.100465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.010 [2024-11-27 07:28:58.100520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.010 [2024-11-27 07:28:58.100537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.010 [2024-11-27 07:28:58.100544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.010 [2024-11-27 07:28:58.100550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.010 [2024-11-27 07:28:58.100564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.110512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.110559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.110572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.110579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.110585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.110599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.120556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.120603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.120616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.120623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.120629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.120643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 
00:33:47.011 [2024-11-27 07:28:58.130660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.130716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.130729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.130736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.130742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.130756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.140627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.140676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.140689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.140696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.140706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.140720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.150651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.150704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.150717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.150725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.150731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.150745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 
00:33:47.011 [2024-11-27 07:28:58.160642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.160689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.160702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.160710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.160716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.160730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.170759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.170809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.170822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.170829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.170835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.170849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.180753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.180808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.180821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.180828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.180834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.180848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 
00:33:47.011 [2024-11-27 07:28:58.190774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.190825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.190839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.190845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.190852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.190866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.200805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.200855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.200868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.200875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.200881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.200895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 00:33:47.011 [2024-11-27 07:28:58.210819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.011 [2024-11-27 07:28:58.210864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.011 [2024-11-27 07:28:58.210877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.011 [2024-11-27 07:28:58.210884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.011 [2024-11-27 07:28:58.210890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.011 [2024-11-27 07:28:58.210904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.011 qpair failed and we were unable to recover it. 
00:33:47.272 [2024-11-27 07:28:58.220845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.272 [2024-11-27 07:28:58.220893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.272 [2024-11-27 07:28:58.220907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.272 [2024-11-27 07:28:58.220914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.272 [2024-11-27 07:28:58.220920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.272 [2024-11-27 07:28:58.220934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.272 qpair failed and we were unable to recover it. 00:33:47.272 [2024-11-27 07:28:58.230869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.272 [2024-11-27 07:28:58.230913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.272 [2024-11-27 07:28:58.230927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.272 [2024-11-27 07:28:58.230934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.272 [2024-11-27 07:28:58.230940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.272 [2024-11-27 07:28:58.230954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.272 qpair failed and we were unable to recover it. 00:33:47.272 [2024-11-27 07:28:58.240884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.272 [2024-11-27 07:28:58.240927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.272 [2024-11-27 07:28:58.240940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.272 [2024-11-27 07:28:58.240947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.272 [2024-11-27 07:28:58.240954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.272 [2024-11-27 07:28:58.240968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.272 qpair failed and we were unable to recover it. 
00:33:47.272 [2024-11-27 07:28:58.250912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.272 [2024-11-27 07:28:58.250960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.272 [2024-11-27 07:28:58.250974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.272 [2024-11-27 07:28:58.250981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.272 [2024-11-27 07:28:58.250987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.272 [2024-11-27 07:28:58.251000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.272 qpair failed and we were unable to recover it. 00:33:47.272 [2024-11-27 07:28:58.260961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.272 [2024-11-27 07:28:58.261011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.272 [2024-11-27 07:28:58.261024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.272 [2024-11-27 07:28:58.261031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.272 [2024-11-27 07:28:58.261037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.272 [2024-11-27 07:28:58.261051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.272 qpair failed and we were unable to recover it. 00:33:47.272 [2024-11-27 07:28:58.270968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.272 [2024-11-27 07:28:58.271013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.272 [2024-11-27 07:28:58.271027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.272 [2024-11-27 07:28:58.271037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.272 [2024-11-27 07:28:58.271043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.272 [2024-11-27 07:28:58.271057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.272 qpair failed and we were unable to recover it. 
00:33:47.272 [2024-11-27 07:28:58.280988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.272 [2024-11-27 07:28:58.281029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.272 [2024-11-27 07:28:58.281042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.272 [2024-11-27 07:28:58.281049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.281056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.281070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 00:33:47.273 [2024-11-27 07:28:58.291016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.291064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.291077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.291084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.291091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.291104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 00:33:47.273 [2024-11-27 07:28:58.301052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.301104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.301117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.301124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.301131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.301145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 
00:33:47.273 [2024-11-27 07:28:58.311080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.311130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.311143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.311150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.311156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.311178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 00:33:47.273 [2024-11-27 07:28:58.321103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.321150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.321167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.321174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.321180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.321194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 00:33:47.273 [2024-11-27 07:28:58.331148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.331198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.331212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.331219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.331225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.331240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 
00:33:47.273 [2024-11-27 07:28:58.341052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.341097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.341110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.341117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.341124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.341138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 00:33:47.273 [2024-11-27 07:28:58.351143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.351190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.351203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.351210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.351217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.351231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 00:33:47.273 [2024-11-27 07:28:58.361252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.361298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.361311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.361318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.361324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.361338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 
00:33:47.273 [2024-11-27 07:28:58.371231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.371279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.371292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.273 [2024-11-27 07:28:58.371299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.273 [2024-11-27 07:28:58.371306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.273 [2024-11-27 07:28:58.371320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.273 qpair failed and we were unable to recover it. 00:33:47.273 [2024-11-27 07:28:58.381249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.273 [2024-11-27 07:28:58.381300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.273 [2024-11-27 07:28:58.381313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.381320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.381326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.381341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 00:33:47.274 [2024-11-27 07:28:58.391302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.391348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.391361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.391368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.391374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.391388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 
00:33:47.274 [2024-11-27 07:28:58.401321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.401368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.401382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.401392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.401398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.401413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 00:33:47.274 [2024-11-27 07:28:58.411360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.411404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.411417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.411424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.411430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.411444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 00:33:47.274 [2024-11-27 07:28:58.421398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.421442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.421455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.421462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.421468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.421482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 
00:33:47.274 [2024-11-27 07:28:58.431267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.431309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.431322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.431329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.431335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.431349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 00:33:47.274 [2024-11-27 07:28:58.441440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.441485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.441498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.441505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.441511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.441529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 00:33:47.274 [2024-11-27 07:28:58.451476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.451519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.451531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.451538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.451544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.451559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 
00:33:47.274 [2024-11-27 07:28:58.461465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.461511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.461524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.461531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.461538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.461552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 00:33:47.274 [2024-11-27 07:28:58.471481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.274 [2024-11-27 07:28:58.471556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.274 [2024-11-27 07:28:58.471569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.274 [2024-11-27 07:28:58.471575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.274 [2024-11-27 07:28:58.471582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.274 [2024-11-27 07:28:58.471596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.274 qpair failed and we were unable to recover it. 00:33:47.536 [2024-11-27 07:28:58.481536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.481578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.481591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.481598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.481605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.481619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 
00:33:47.536 [2024-11-27 07:28:58.491560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.491655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.491668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.491675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.491682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.491695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 00:33:47.536 [2024-11-27 07:28:58.501625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.501672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.501685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.501692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.501699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.501712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 00:33:47.536 [2024-11-27 07:28:58.511625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.511666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.511679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.511686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.511692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.511706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 
00:33:47.536 [2024-11-27 07:28:58.521630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.521677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.521689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.521696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.521702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.521716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 00:33:47.536 [2024-11-27 07:28:58.531651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.531698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.531714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.531721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.531727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.531741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 00:33:47.536 [2024-11-27 07:28:58.541708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.541756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.541769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.541776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.541783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.541796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 
00:33:47.536 [2024-11-27 07:28:58.551718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.551760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.551773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.551780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.551786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.551800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 00:33:47.536 [2024-11-27 07:28:58.561713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.561757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.536 [2024-11-27 07:28:58.561770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.536 [2024-11-27 07:28:58.561777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.536 [2024-11-27 07:28:58.561783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.536 [2024-11-27 07:28:58.561797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.536 qpair failed and we were unable to recover it. 00:33:47.536 [2024-11-27 07:28:58.571767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.536 [2024-11-27 07:28:58.571809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.571822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.571830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.571839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.571854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 
00:33:47.537 [2024-11-27 07:28:58.581813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.581858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.581872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.581879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.581885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.581899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 00:33:47.537 [2024-11-27 07:28:58.591824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.591878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.591902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.591911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.591918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.591937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 00:33:47.537 [2024-11-27 07:28:58.601857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.601898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.601917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.601923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.601927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.601941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 
00:33:47.537 [2024-11-27 07:28:58.611845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.611891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.611909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.611915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.611920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.611934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 00:33:47.537 [2024-11-27 07:28:58.621926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.621971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.621990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.621996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.622001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.622015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 00:33:47.537 [2024-11-27 07:28:58.631942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.631984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.632002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.632008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.632014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.632027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 
00:33:47.537 [2024-11-27 07:28:58.641977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.642020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.642031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.642036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.642041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.642052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 00:33:47.537 [2024-11-27 07:28:58.651999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.652037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.652048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.652053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.652057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.652068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 00:33:47.537 [2024-11-27 07:28:58.662046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.662087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.662100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.662105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.662109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.662120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.537 qpair failed and we were unable to recover it. 
00:33:47.537 [2024-11-27 07:28:58.672051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.537 [2024-11-27 07:28:58.672136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.537 [2024-11-27 07:28:58.672145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.537 [2024-11-27 07:28:58.672150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.537 [2024-11-27 07:28:58.672154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.537 [2024-11-27 07:28:58.672168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.538 qpair failed and we were unable to recover it. 00:33:47.538 [2024-11-27 07:28:58.682076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.538 [2024-11-27 07:28:58.682113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.538 [2024-11-27 07:28:58.682123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.538 [2024-11-27 07:28:58.682128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.538 [2024-11-27 07:28:58.682133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.538 [2024-11-27 07:28:58.682143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.538 qpair failed and we were unable to recover it. 00:33:47.538 [2024-11-27 07:28:58.692087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.538 [2024-11-27 07:28:58.692126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.538 [2024-11-27 07:28:58.692136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.538 [2024-11-27 07:28:58.692140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.538 [2024-11-27 07:28:58.692145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.538 [2024-11-27 07:28:58.692155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.538 qpair failed and we were unable to recover it. 
00:33:47.538 [2024-11-27 07:28:58.702133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.538 [2024-11-27 07:28:58.702225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.538 [2024-11-27 07:28:58.702234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.538 [2024-11-27 07:28:58.702239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.538 [2024-11-27 07:28:58.702247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.538 [2024-11-27 07:28:58.702257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.538 qpair failed and we were unable to recover it. 00:33:47.538 [2024-11-27 07:28:58.712181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.538 [2024-11-27 07:28:58.712217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.538 [2024-11-27 07:28:58.712226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.538 [2024-11-27 07:28:58.712231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.538 [2024-11-27 07:28:58.712235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.538 [2024-11-27 07:28:58.712245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.538 qpair failed and we were unable to recover it. 00:33:47.538 [2024-11-27 07:28:58.722182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.538 [2024-11-27 07:28:58.722220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.538 [2024-11-27 07:28:58.722230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.538 [2024-11-27 07:28:58.722235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.538 [2024-11-27 07:28:58.722239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.538 [2024-11-27 07:28:58.722249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.538 qpair failed and we were unable to recover it. 
00:33:47.538 [2024-11-27 07:28:58.732214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.538 [2024-11-27 07:28:58.732254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.538 [2024-11-27 07:28:58.732265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.538 [2024-11-27 07:28:58.732270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.538 [2024-11-27 07:28:58.732274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.538 [2024-11-27 07:28:58.732285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.538 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.742264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.742308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.742318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.742323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.742327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.742338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.752264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.752303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.752313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.752319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.752324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.752335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 
00:33:47.805 [2024-11-27 07:28:58.762283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.762325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.762334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.762339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.762344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.762354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.772341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.772379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.772389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.772394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.772398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.772409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.782376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.782415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.782425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.782430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.782434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.782444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 
00:33:47.805 [2024-11-27 07:28:58.792246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.792290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.792300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.792305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.792309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.792319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.802387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.802424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.802433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.802438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.802443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.802453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.812434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.812473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.812482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.812487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.812491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.812501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 
00:33:47.805 [2024-11-27 07:28:58.822498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.822540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.822550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.822555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.822559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.822569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.832460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.832494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.832504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.832512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.832516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.832526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.842512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.842550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.842559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.842565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.842569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.842579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 
00:33:47.805 [2024-11-27 07:28:58.852554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.852598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.852608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.805 [2024-11-27 07:28:58.852613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.805 [2024-11-27 07:28:58.852617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.805 [2024-11-27 07:28:58.852627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.805 qpair failed and we were unable to recover it. 00:33:47.805 [2024-11-27 07:28:58.862582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.805 [2024-11-27 07:28:58.862634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.805 [2024-11-27 07:28:58.862644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.862649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.862653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.862663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.872597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.872687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.872697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.872702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.872706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.872719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 
00:33:47.806 [2024-11-27 07:28:58.882598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.882642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.882653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.882658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.882663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.882673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.892623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.892663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.892673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.892678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.892683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.892693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.902552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.902598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.902609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.902614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.902618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.902629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 
00:33:47.806 [2024-11-27 07:28:58.912712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.912751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.912761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.912766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.912770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.912780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.922709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.922796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.922806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.922811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.922815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.922825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.932743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.932813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.932823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.932828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.932832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.932842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 
00:33:47.806 [2024-11-27 07:28:58.942782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.942826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.942836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.942841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.942845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.942856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.952811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.952850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.952860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.952864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.952869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.952879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.962839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.962881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.962890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.962898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.962902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.962912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 
00:33:47.806 [2024-11-27 07:28:58.972866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.972908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.972918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.972923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.972928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.972938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.982907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.982948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.982957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.982963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.806 [2024-11-27 07:28:58.982967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.806 [2024-11-27 07:28:58.982977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.806 qpair failed and we were unable to recover it. 00:33:47.806 [2024-11-27 07:28:58.992909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.806 [2024-11-27 07:28:58.992948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.806 [2024-11-27 07:28:58.992958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.806 [2024-11-27 07:28:58.992963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.807 [2024-11-27 07:28:58.992967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.807 [2024-11-27 07:28:58.992977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.807 qpair failed and we were unable to recover it. 
00:33:47.807 [2024-11-27 07:28:59.002803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.807 [2024-11-27 07:28:59.002842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.807 [2024-11-27 07:28:59.002852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.807 [2024-11-27 07:28:59.002857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.807 [2024-11-27 07:28:59.002862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:47.807 [2024-11-27 07:28:59.002877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:47.807 qpair failed and we were unable to recover it. 00:33:48.112 [2024-11-27 07:28:59.012969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.112 [2024-11-27 07:28:59.013010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.112 [2024-11-27 07:28:59.013021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.112 [2024-11-27 07:28:59.013026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.112 [2024-11-27 07:28:59.013030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:48.112 [2024-11-27 07:28:59.013040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:48.112 qpair failed and we were unable to recover it. 00:33:48.112 [2024-11-27 07:28:59.023019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.112 [2024-11-27 07:28:59.023056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.112 [2024-11-27 07:28:59.023066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.023071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.023075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:48.113 [2024-11-27 07:28:59.023085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:48.113 qpair failed and we were unable to recover it. 
00:33:48.113 [2024-11-27 07:28:59.033024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.113 [2024-11-27 07:28:59.033062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.113 [2024-11-27 07:28:59.033071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.033076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.033081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b6c000b90 00:33:48.113 [2024-11-27 07:28:59.033090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:48.113 qpair failed and we were unable to recover it. 00:33:48.113 [2024-11-27 07:28:59.043053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.113 [2024-11-27 07:28:59.043144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.113 [2024-11-27 07:28:59.043221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.043247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.043268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18520c0 00:33:48.113 [2024-11-27 07:28:59.043322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:48.113 qpair failed and we were unable to recover it. 00:33:48.113 [2024-11-27 07:28:59.053087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.113 [2024-11-27 07:28:59.053167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.113 [2024-11-27 07:28:59.053198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.053215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.053230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18520c0 00:33:48.113 [2024-11-27 07:28:59.053261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:48.113 qpair failed and we were unable to recover it. 
00:33:48.113 [2024-11-27 07:28:59.063119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.113 [2024-11-27 07:28:59.063229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.113 [2024-11-27 07:28:59.063294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.063320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.063342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b68000b90 00:33:48.113 [2024-11-27 07:28:59.063399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:48.113 qpair failed and we were unable to recover it. 00:33:48.113 [2024-11-27 07:28:59.073125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.113 [2024-11-27 07:28:59.073224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.113 [2024-11-27 07:28:59.073255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.073270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.073285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b68000b90 00:33:48.113 [2024-11-27 07:28:59.073317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:48.113 qpair failed and we were unable to recover it. 00:33:48.113 [2024-11-27 07:28:59.083177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.113 [2024-11-27 07:28:59.083270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.113 [2024-11-27 07:28:59.083334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.083361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.083383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b74000b90 00:33:48.113 [2024-11-27 07:28:59.083438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.113 qpair failed and we were unable to recover it. 
00:33:48.113 [2024-11-27 07:28:59.093193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.113 [2024-11-27 07:28:59.093269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.113 [2024-11-27 07:28:59.093308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.113 [2024-11-27 07:28:59.093324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.113 [2024-11-27 07:28:59.093339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b74000b90 00:33:48.113 [2024-11-27 07:28:59.093372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.113 qpair failed and we were unable to recover it. 00:33:48.113 [2024-11-27 07:28:59.093535] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:33:48.113 A controller has encountered a failure and is being reset. 00:33:48.113 [2024-11-27 07:28:59.093651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1847e10 (9): Bad file descriptor 00:33:48.113 Controller properly reset. 00:33:48.113 Initializing NVMe Controllers 00:33:48.113 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:48.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:48.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:48.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:48.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:48.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:48.113 Initialization complete. Launching workers. 
00:33:48.113 Starting thread on core 1 00:33:48.113 Starting thread on core 2 00:33:48.113 Starting thread on core 3 00:33:48.113 Starting thread on core 0 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:48.113 00:33:48.113 real 0m11.592s 00:33:48.113 user 0m21.993s 00:33:48.113 sys 0m3.843s 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.113 ************************************ 00:33:48.113 END TEST nvmf_target_disconnect_tc2 00:33:48.113 ************************************ 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:48.113 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:48.404 rmmod nvme_tcp 00:33:48.404 rmmod nvme_fabrics 00:33:48.404 rmmod nvme_keyring 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2586878 ']' 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2586878 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2586878 ']' 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2586878 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586878 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586878' 00:33:48.404 killing process with pid 2586878 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 2586878 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2586878 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.404 07:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.947 07:29:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:50.947 00:33:50.947 real 0m22.000s 00:33:50.947 user 0m50.399s 00:33:50.947 sys 0m10.086s 00:33:50.947 07:29:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.947 07:29:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:50.947 ************************************ 00:33:50.947 END TEST nvmf_target_disconnect 00:33:50.947 ************************************ 00:33:50.947 07:29:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:50.947 00:33:50.947 real 6m33.923s 00:33:50.947 user 11m24.788s 00:33:50.947 sys 2m15.700s 00:33:50.947 07:29:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.947 07:29:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.947 ************************************ 00:33:50.947 END TEST nvmf_host 00:33:50.947 ************************************ 00:33:50.947 07:29:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:33:50.947 07:29:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:33:50.947 07:29:01 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:50.947 07:29:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:50.947 07:29:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.947 07:29:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:50.947 ************************************ 00:33:50.947 START TEST nvmf_target_core_interrupt_mode 00:33:50.947 ************************************ 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:50.947 * Looking for test storage... 00:33:50.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.947 --rc genhtml_branch_coverage=1 00:33:50.947 --rc genhtml_function_coverage=1 00:33:50.947 --rc genhtml_legend=1 00:33:50.947 --rc geninfo_all_blocks=1 00:33:50.947 --rc geninfo_unexecuted_blocks=1 00:33:50.947 00:33:50.947 ' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.947 --rc genhtml_branch_coverage=1 00:33:50.947 --rc genhtml_function_coverage=1 00:33:50.947 --rc genhtml_legend=1 00:33:50.947 --rc geninfo_all_blocks=1 00:33:50.947 --rc geninfo_unexecuted_blocks=1 00:33:50.947 00:33:50.947 ' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.947 --rc genhtml_branch_coverage=1 00:33:50.947 --rc genhtml_function_coverage=1 00:33:50.947 --rc genhtml_legend=1 00:33:50.947 --rc geninfo_all_blocks=1 00:33:50.947 --rc geninfo_unexecuted_blocks=1 00:33:50.947 00:33:50.947 ' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.947 --rc genhtml_branch_coverage=1 00:33:50.947 --rc genhtml_function_coverage=1 00:33:50.947 --rc genhtml_legend=1 00:33:50.947 --rc geninfo_all_blocks=1 00:33:50.947 --rc geninfo_unexecuted_blocks=1 00:33:50.947 00:33:50.947 ' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.947 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.948 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.948 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.948 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:33:50.948 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.948 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:33:50.948 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.948 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:50.948 ************************************ 00:33:50.948 START TEST nvmf_abort 00:33:50.948 ************************************ 00:33:50.948 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:50.948 * Looking for test storage... 00:33:50.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:51.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.210 --rc genhtml_branch_coverage=1 00:33:51.210 --rc genhtml_function_coverage=1 00:33:51.210 --rc genhtml_legend=1 00:33:51.210 --rc geninfo_all_blocks=1 00:33:51.210 --rc geninfo_unexecuted_blocks=1 00:33:51.210 00:33:51.210 ' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:51.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.210 --rc genhtml_branch_coverage=1 00:33:51.210 --rc genhtml_function_coverage=1 00:33:51.210 --rc genhtml_legend=1 00:33:51.210 --rc geninfo_all_blocks=1 00:33:51.210 --rc geninfo_unexecuted_blocks=1 00:33:51.210 00:33:51.210 ' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:51.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.210 --rc genhtml_branch_coverage=1 00:33:51.210 --rc genhtml_function_coverage=1 00:33:51.210 --rc genhtml_legend=1 00:33:51.210 --rc geninfo_all_blocks=1 00:33:51.210 --rc geninfo_unexecuted_blocks=1 00:33:51.210 00:33:51.210 ' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:51.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.210 --rc genhtml_branch_coverage=1 00:33:51.210 --rc genhtml_function_coverage=1 00:33:51.210 --rc genhtml_legend=1 00:33:51.210 --rc geninfo_all_blocks=1 00:33:51.210 --rc geninfo_unexecuted_blocks=1 00:33:51.210 00:33:51.210 ' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.210 07:29:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.210 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.211 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.356 07:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:59.356 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:59.356 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:59.356 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:59.356 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.356 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:59.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:59.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:33:59.357 00:33:59.357 --- 10.0.0.2 ping statistics --- 00:33:59.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.357 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:59.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:33:59.357 00:33:59.357 --- 10.0.0.1 ping statistics --- 00:33:59.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.357 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2592332 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2592332 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2592332 ']' 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.357 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.357 [2024-11-27 07:29:09.877701] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:59.357 [2024-11-27 07:29:09.878832] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:33:59.357 [2024-11-27 07:29:09.878883] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.357 [2024-11-27 07:29:09.982569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:59.357 [2024-11-27 07:29:10.038500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.357 [2024-11-27 07:29:10.038556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.357 [2024-11-27 07:29:10.038567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.357 [2024-11-27 07:29:10.038575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.357 [2024-11-27 07:29:10.038582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.357 [2024-11-27 07:29:10.040288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.357 [2024-11-27 07:29:10.040581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.357 [2024-11-27 07:29:10.040584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.357 [2024-11-27 07:29:10.127129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:59.357 [2024-11-27 07:29:10.128325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:59.357 [2024-11-27 07:29:10.128529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:59.357 [2024-11-27 07:29:10.128714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.618 [2024-11-27 07:29:10.745639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.618 Malloc0 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.618 Delay0 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.618 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
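Taken together, the rpc_cmd calls traced above (plus the listener added just below) build the whole abort-test topology. A standalone equivalent driven through scripts/rpc.py directly, with every argument copied verbatim from this log, would be:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB malloc bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1,000,000 us avg/p99 read and write latency,
                                                    # so I/Os stay in flight long enough to abort
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420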
00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.880 [2024-11-27 07:29:10.841568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.880 07:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:59.880 [2024-11-27 07:29:10.984981] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:02.430 Initializing NVMe Controllers 00:34:02.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:02.430 controller IO queue size 128 less than required 00:34:02.430 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:34:02.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:34:02.430 Initialization complete. Launching workers. 
00:34:02.430 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28518
00:34:02.430 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28575, failed to submit 66
00:34:02.430 success 28518, unsuccessful 57, failed 0
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:02.430 rmmod nvme_tcp
00:34:02.430 rmmod nvme_fabrics
00:34:02.430 rmmod nvme_keyring
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2592332 ']'
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2592332
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2592332 ']'
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2592332
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2592332
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2592332'
00:34:02.430 killing process with pid 2592332
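Those three counter lines are the point of the abort test, and they reconcile: 28,575 aborts were submitted, of which 28,518 succeeded and 57 did not, and a further 66 aborts could not be submitted at all. On the namespace side the 28,518 successfully aborted reads show up as "failed" while only 123 reads completed normally, which is expected given the 1,000,000 us latencies configured on Delay0; both views account for the same 28,641 I/Os (123 + 28,518 = 28,575 + 66). The workload that produced them, copied from the xtrace above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128   # one core, one-second run, queue depth 128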
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2592332
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2592332
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:02.430 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:04.353
00:34:04.353 real 0m13.399s
00:34:04.353 user 0m10.817s
00:34:04.353 sys 0m7.013s
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:04.353 ************************************
00:34:04.353 END TEST nvmf_abort
00:34:04.353 ************************************
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:04.353 ************************************
00:34:04.353 START TEST nvmf_ns_hotplug_stress
00:34:04.353 ************************************
00:34:04.353 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:34:04.614 * Looking for test storage...
00:34:04.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:04.614 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:04.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.615 --rc genhtml_branch_coverage=1 00:34:04.615 --rc genhtml_function_coverage=1 00:34:04.615 --rc genhtml_legend=1 00:34:04.615 --rc geninfo_all_blocks=1 00:34:04.615 --rc geninfo_unexecuted_blocks=1 00:34:04.615 00:34:04.615 ' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:04.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.615 --rc genhtml_branch_coverage=1 00:34:04.615 --rc genhtml_function_coverage=1 00:34:04.615 --rc genhtml_legend=1 00:34:04.615 --rc geninfo_all_blocks=1 00:34:04.615 --rc geninfo_unexecuted_blocks=1 00:34:04.615 00:34:04.615 ' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:04.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.615 --rc genhtml_branch_coverage=1 00:34:04.615 --rc genhtml_function_coverage=1 00:34:04.615 --rc genhtml_legend=1 00:34:04.615 --rc geninfo_all_blocks=1 00:34:04.615 --rc geninfo_unexecuted_blocks=1 00:34:04.615 00:34:04.615 ' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:04.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.615 --rc genhtml_branch_coverage=1 00:34:04.615 --rc genhtml_function_coverage=1 
00:34:04.615 --rc genhtml_legend=1 00:34:04.615 --rc geninfo_all_blocks=1 00:34:04.615 --rc geninfo_unexecuted_blocks=1 00:34:04.615 00:34:04.615 ' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.615 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.616 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:12.753 07:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:12.753 07:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:12.753 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:12.754 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:12.754 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.754 
07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:12.754 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:12.754 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.754 07:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.754 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:12.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:34:12.754 00:34:12.754 --- 10.0.0.2 ping statistics --- 00:34:12.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.754 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:34:12.754 00:34:12.754 --- 10.0.0.1 ping statistics --- 00:34:12.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.754 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2597293 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2597293 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2597293 ']' 00:34:12.754 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.755 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.755 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
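One detail worth noticing in the ipts call a few entries above: every rule the harness inserts carries an iptables comment beginning SPDK_NVMF:, which is what lets the iptr teardown (seen after the abort test finished) remove them all without tracking any state. The pattern, with both halves copied verbatim from this log:

  # Open NVMe/TCP port 4420 on the initiator-side interface, tagging the rule:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Later, drop every tagged rule in one pass:
  iptables-save | grep -v SPDK_NVMF | iptables-restore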
00:34:12.755 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.755 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:12.755 [2024-11-27 07:29:23.361971] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:12.755 [2024-11-27 07:29:23.363103] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:34:12.755 [2024-11-27 07:29:23.363153] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.755 [2024-11-27 07:29:23.463212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:12.755 [2024-11-27 07:29:23.514501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.755 [2024-11-27 07:29:23.514557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.755 [2024-11-27 07:29:23.514565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.755 [2024-11-27 07:29:23.514573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.755 [2024-11-27 07:29:23.514585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.755 [2024-11-27 07:29:23.516421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:12.755 [2024-11-27 07:29:23.516680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:12.755 [2024-11-27 07:29:23.516681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.755 [2024-11-27 07:29:23.596028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:12.755 [2024-11-27 07:29:23.597099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:12.755 [2024-11-27 07:29:23.597413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:12.755 [2024-11-27 07:29:23.597597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:34:13.015 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:13.015 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:34:13.015 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:13.015 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:13.015 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:34:13.275 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:13.275 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:34:13.275 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:34:13.275 [2024-11-27 07:29:24.413744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:13.275 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:34:13.535 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:13.795 [2024-11-27 07:29:24.798557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:13.795 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:14.056 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:34:14.056 Malloc0
00:34:14.056 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:14.316 Delay0
00:34:14.316 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:14.577 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:34:14.577 NULL1
00:34:14.577 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
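Everything from here on is one loop. With the target populated (Delay0 as namespace 1, NULL1 as a second namespace, and the subsystem capped at 10 namespaces by -m 10), the script starts spdk_nvme_perf in the background and then hot-removes and re-adds namespace 1 while growing NULL1, for as long as perf keeps running. A condensed reconstruction of the xtrace entries below (script lines @42-@50; a sketch of the traced commands, not the script verbatim):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &    # 30 s random-read load; errors are expected
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do        # loop until the perf run exits
      "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      "$SPDK/scripts/rpc.py" bdev_null_resize NULL1 "$null_size"   # 1001, 1002, ... as below
  done

Each burst of "Read completed with error (sct=0, sc=11)" in the entries that follow is perf catching the window in which namespace 1 is detached, and the "true" lines are the resize RPC's return value.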
00:34:14.837 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2597693 00:34:14.837 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:14.837 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:34:14.837 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:16.222 Read completed with error (sct=0, sc=11) 00:34:16.222 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:16.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:16.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:16.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:16.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:16.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:16.222 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:16.222 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:16.483 true 00:34:16.483 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:16.483 07:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:17.426 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:17.426 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:17.426 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:34:17.687 true 00:34:17.687 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:17.687 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:17.948 07:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:34:17.948 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:34:17.948 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:34:18.218 true 00:34:18.218 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:18.218 07:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:19.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.608 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:19.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.608 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:34:19.608 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:34:19.608 true 00:34:19.608 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:19.608 07:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:20.549 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:20.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:20.809 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:34:20.809 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:34:20.809 true 00:34:20.809 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:20.810 07:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:21.070 07:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:21.331 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:34:21.331 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:34:21.331 true 00:34:21.331 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:21.331 07:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:22.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:22.716 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:22.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:22.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:22.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:22.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:22.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:22.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:22.716 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:34:22.716 07:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:34:22.976 true 00:34:22.976 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:22.976 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:23.918 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:23.918 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:34:23.918 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:34:24.178 true 00:34:24.178 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:24.178 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:24.178 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:24.439 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:34:24.439 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:34:24.700 true 00:34:24.700 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:24.700 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.642 07:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:25.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.902 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:34:25.903 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:34:26.163 true 00:34:26.163 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:26.163 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.104 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:27.104 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:34:27.104 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:34:27.365 true 00:34:27.365 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:27.365 07:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.626 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:27.626 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:34:27.626 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:34:27.887 true 00:34:27.887 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:27.887 07:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:29.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.279 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.279 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:34:29.279 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:34:29.279 true 00:34:29.279 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:29.279 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.221 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:30.482 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:34:30.482 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:34:30.482 true 00:34:30.482 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:30.482 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.742 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:31.002 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:34:31.002 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:34:31.002 true 00:34:31.002 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:31.002 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:32.386 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:34:32.386 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:34:32.646 true 00:34:32.646 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:32.646 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.587 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:33.588 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:34:33.588 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:34:33.849 true 00:34:33.849 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:33.849 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.849 07:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:34.109 07:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:34:34.110 07:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:34:34.370 true 00:34:34.370 07:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:34.370 07:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:35.312 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:35.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:35.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:35.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:35.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:35.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:35.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:35.573 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:34:35.573 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:34:35.833 true 00:34:35.833 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:35.833 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.777 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:36.777 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:34:36.777 07:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:34:37.037 true 00:34:37.037 07:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:37.037 07:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:37.298 07:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:37.298 07:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:34:37.298 07:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:34:37.558 true 00:34:37.558 07:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:37.558 07:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 07:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:38.943 07:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:34:38.943 07:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:34:38.943 true 00:34:38.943 07:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:38.943 07:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:39.885 07:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:39.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:40.147 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:34:40.147 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:34:40.147 true 00:34:40.147 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:40.147 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:40.408 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:40.668 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:34:40.668 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:34:40.668 true 00:34:40.929 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:40.929 07:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:41.871 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:41.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:42.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:42.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:42.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:42.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:42.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:42.133 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:34:42.133 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:34:42.393 true 00:34:42.393 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:42.393 07:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:34:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:43.336 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:43.336 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:34:43.336 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:34:43.597 true 00:34:43.597 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:43.597 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:43.857 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:43.857 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:34:43.857 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:34:44.117 true 00:34:44.117 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:44.117 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.117 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:44.377 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:34:44.377 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:34:44.638 true 00:34:44.638 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:44.638 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.898 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:44.898 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:34:44.898 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:34:45.159 true 00:34:45.159 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:45.159 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:45.159 Initializing NVMe Controllers
00:34:45.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:45.159 Controller IO queue size 128, less than required.
00:34:45.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:45.160 Controller IO queue size 128, less than required.
00:34:45.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:45.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:45.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:34:45.160 Initialization complete. Launching workers.
00:34:45.160 ========================================================
00:34:45.160                                                                                                      Latency(us)
00:34:45.160 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:34:45.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2386.52       1.17   36513.53    1467.78 1014191.80
00:34:45.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   19980.01       9.76    6406.09    1134.74  400273.97
00:34:45.160 ========================================================
00:34:45.160 Total                                                                    :   22366.53      10.92    9618.57    1134.74 1014191.80
00:34:45.160
00:34:45.420 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:45.420 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:34:45.420 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:34:45.680 true 00:34:45.680 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2597693 00:34:45.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2597693) - No such process 00:34:45.680 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2597693 00:34:45.680 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:45.940 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:45.940 07:29:57
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:34:45.940 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:34:45.940 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:34:45.940 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:45.940 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:34:46.200 null0 00:34:46.200 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:46.200 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:46.200 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:34:46.200 null1 00:34:46.200 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:46.200 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:46.201 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:34:46.484 null2 00:34:46.484 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:46.484 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:46.484 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:46.484 null3 00:34:46.484 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:46.484 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:46.484 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:46.766 null4 00:34:46.766 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:46.766 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:46.767 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:47.030 null5 00:34:47.030 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:47.030 07:29:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:47.030 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:47.030 null6 00:34:47.030 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:47.030 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:47.030 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:47.291 null7 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
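(For readers skimming the xtrace: the @42-@53 markers above are line numbers in test/nvmf/target/ns_hotplug_stress.sh. Below is a minimal sketch of the first-phase cycle being traced, reconstructed only from the commands visible in this log; it is not the verbatim script, and the loop shape and helper variables are assumptions.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# Start a 30s randread workload against the target in the background,
# continuing past I/O errors (-Q; see the perf command line at @40 above).
"$perf" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

# While perf is alive, hot-remove and re-add namespace 1 and resize NULL1
# by one size unit per pass (the null_size=1001, 1002, ... lines above).
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc" bdev_null_resize NULL1 "$null_size"
done
wait "$PERF_PID"

When the 30-second run ends, kill -0 reports "No such process" (the line 44 message in the perf summary block above) and the script tears down both namespaces before moving to the parallel phase.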
00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.291 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
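(The @58-@66 markers trace that second phase: eight background add_remove workers, each doing ten add/remove cycles of one namespace ID backed by its own null bdev. A sketch under the same caveats as above, with only the loop bounds, bdev parameters, and RPC calls taken from the trace; the helper-function shape is an assumption.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Ten hot-add/hot-remove cycles for one nsid/bdev pair (the @14-@18 markers).
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$rpc" bdev_null_create "null$i" 100 4096   # 100 MiB bdev, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &          # nsid 1..8 onto null0..null7
    pids+=($!)
done
wait "${pids[@]}"   # the @66 wait with eight worker PIDs, visible further down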
00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
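(A note on the recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines in the first phase: judging by those counters, the -Q 1000 flag on the perf command line makes perf tolerate I/O errors and print only one of every 1000 repeated error messages instead of aborting. Status type 0, code 11 decodes to NVMe generic command status 0x0b, Invalid Namespace or Format, which is exactly what in-flight reads should see while namespace 1 is hot-removed; the errors are the point of the stress test, not a failure of it.)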
00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2603974 2603977 2603979 2603982 2603984 2603987 2603990 2603993 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.292 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:47.554 07:29:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.554 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.850 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:47.851 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:47.851 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:48.112 
07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:48.112 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.372 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.372 07:29:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.633 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:48.893 07:29:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:48.893 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:48.893 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:48.893 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:48.893 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:48.893 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:48.893 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:48.893 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.155 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:49.436 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:49.436 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:49.437 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.697 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:49.958 07:30:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:49.958 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.958 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.958 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:49.958 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:49.958 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:50.219 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.480 07:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:50.480 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:50.740 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.000 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:51.000 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:51.000 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:51.000 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:34:51.000 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:51.001 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.261 rmmod nvme_tcp 00:34:51.261 rmmod nvme_fabrics 00:34:51.261 rmmod nvme_keyring 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # 
return 0 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2597293 ']' 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2597293 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2597293 ']' 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2597293 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2597293 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2597293' 00:34:51.261 killing process with pid 2597293 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2597293 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2597293 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.261 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.262 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.262 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.262 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.262 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.262 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.804 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
00:34:53.804 00:34:53.804 real 0m49.007s 00:34:53.804 user 2m55.148s 00:34:53.804 sys 0m19.928s 00:34:53.804 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.804 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:53.804 ************************************ 00:34:53.804 END TEST nvmf_ns_hotplug_stress 00:34:53.804 ************************************ 00:34:53.804 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:53.804 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:53.805 ************************************ 00:34:53.805 START TEST nvmf_delete_subsystem 00:34:53.805 ************************************ 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:53.805 * Looking for test storage... 00:34:53.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.805 07:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.805 --rc genhtml_branch_coverage=1 00:34:53.805 --rc genhtml_function_coverage=1 00:34:53.805 --rc genhtml_legend=1 00:34:53.805 --rc geninfo_all_blocks=1 00:34:53.805 --rc geninfo_unexecuted_blocks=1 00:34:53.805 00:34:53.805 ' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.805 --rc genhtml_branch_coverage=1 00:34:53.805 --rc genhtml_function_coverage=1 00:34:53.805 --rc genhtml_legend=1 00:34:53.805 --rc geninfo_all_blocks=1 00:34:53.805 --rc geninfo_unexecuted_blocks=1 00:34:53.805 00:34:53.805 ' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:34:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.805 --rc genhtml_branch_coverage=1 00:34:53.805 --rc genhtml_function_coverage=1 00:34:53.805 --rc genhtml_legend=1 00:34:53.805 --rc geninfo_all_blocks=1 00:34:53.805 --rc geninfo_unexecuted_blocks=1 00:34:53.805 00:34:53.805 ' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:53.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.805 --rc genhtml_branch_coverage=1 00:34:53.805 --rc genhtml_function_coverage=1 00:34:53.805 --rc genhtml_legend=1 00:34:53.805 --rc geninfo_all_blocks=1 00:34:53.805 --rc geninfo_unexecuted_blocks=1 00:34:53.805 00:34:53.805 ' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.805 
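The "lt 1.15 2" check traced above is scripts/common.sh deciding whether the installed lcov predates version 2, so the legacy --rc lcov_* option spellings can be exported for it: both version strings are split on dots and dashes and compared field by field. A minimal standalone sketch of the same idea (version_lt is an illustrative name, not the SPDK helper; fields are assumed numeric):

    # Compare two dotted version strings numerically, field by field,
    # padding the shorter one with zeros; succeed iff $1 < $2.
    version_lt() {
        local -a v1 v2
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* names"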
07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.805 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.806 07:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.806 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.944 07:30:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.944 07:30:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:01.944 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.944 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:01.945 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.945 07:30:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:01.945 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.945 07:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:01.945 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:01.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:35:01.945 00:35:01.945 --- 10.0.0.2 ping statistics --- 00:35:01.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.945 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:35:01.945 00:35:01.945 --- 10.0.0.1 ping statistics --- 00:35:01.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.945 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:01.945 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2609534 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2609534 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2609534 ']' 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
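Everything nvmf_tcp_init did above reduces to a dozen ip(8) calls: the first E810 port is moved into a fresh network namespace to act as the target, the second stays in the root namespace as the initiator, and the two ends ping each other (the ports on this phy rig are evidently cabled back to back). A condensed replay of the trace, with the interface names and addresses it used:

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Let NVMe/TCP traffic for port 4420 through the host firewall.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> root namespace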
00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.946 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:01.946 [2024-11-27 07:30:12.412964] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:01.946 [2024-11-27 07:30:12.414106] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:35:01.946 [2024-11-27 07:30:12.414166] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.946 [2024-11-27 07:30:12.514395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:01.946 [2024-11-27 07:30:12.566399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.946 [2024-11-27 07:30:12.566452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.946 [2024-11-27 07:30:12.566460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.946 [2024-11-27 07:30:12.566469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.946 [2024-11-27 07:30:12.566475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.946 [2024-11-27 07:30:12.568146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.946 [2024-11-27 07:30:12.568150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.946 [2024-11-27 07:30:12.646120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:01.946 [2024-11-27 07:30:12.646823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:01.946 [2024-11-27 07:30:12.647075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:02.207 [2024-11-27 07:30:13.269398] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:02.207 [2024-11-27 07:30:13.302064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:02.207 NULL1 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.207 07:30:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:02.207 Delay0 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.207 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:02.208 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.208 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:02.208 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.208 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2609884 00:35:02.208 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:35:02.208 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:02.208 [2024-11-27 07:30:13.408892] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
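The rpc_cmd calls above amount to a six-line target configuration, and the delay bdev is the load-bearing piece: padding every I/O to roughly a second guarantees the queue is still full of in-flight requests when the subsystem is deleted mid-run. Replayed against the stock scripts/rpc.py client, with values copied from the trace (the four bdev_delay_create knobs are average and p99 latency for reads and writes, in microseconds):

    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192      # flags as traced; -u is the I/O unit size
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                # any host, serial number, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512              # 1000 MB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s per I/O
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf then drives queue-depth-128 randrw traffic (70% reads, 512-byte I/O) from cores 2-3 (-c 0xC) while the test deletes the subsystem out from under it; that is what produces the wall of "completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" submissions that follows.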
00:35:04.753 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:04.753 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.753 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 Read completed with error (sct=0, sc=8) 00:35:04.753 Write completed with error (sct=0, sc=8) 00:35:04.753 starting I/O failed: -6 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 [2024-11-27 07:30:15.541374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeab2c0 is same with the state(6) to be set 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 
00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, 
sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 starting I/O failed: -6 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 [2024-11-27 07:30:15.546232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f25d800d490 is same with the state(6) to be set 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 
00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Read completed with error (sct=0, sc=8) 00:35:04.754 Write completed with error (sct=0, sc=8) 00:35:05.325 [2024-11-27 07:30:16.509506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeac9b0 is same with the state(6) to be set 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 [2024-11-27 07:30:16.544737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeab4a0 is same with the state(6) to be set 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 [2024-11-27 07:30:16.545220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xeab860 is same with the state(6) to be set 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 [2024-11-27 07:30:16.548724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f25d800d020 is same with the state(6) to be set 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Write completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 Read completed with error (sct=0, sc=8) 00:35:05.586 [2024-11-27 07:30:16.548798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f25d800d7c0 is same with the state(6) to be set 00:35:05.586 Initializing NVMe Controllers 00:35:05.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:05.586 Controller IO queue size 128, less than required. 00:35:05.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:05.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:05.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:05.586 Initialization complete. Launching workers. 
00:35:05.586 ======================================================== 00:35:05.586 Latency(us) 00:35:05.586 Device Information : IOPS MiB/s Average min max 00:35:05.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.20 0.08 906946.86 371.75 1007667.07 00:35:05.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.73 0.08 920888.14 300.89 1012041.12 00:35:05.586 ======================================================== 00:35:05.586 Total : 322.94 0.16 913799.36 300.89 1012041.12 00:35:05.586 00:35:05.586 [2024-11-27 07:30:16.549473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeac9b0 (9): Bad file descriptor 00:35:05.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:35:05.587 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.587 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:35:05.587 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2609884 00:35:05.587 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2609884 00:35:06.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2609884) - No such process 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2609884 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2609884 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2609884 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.159 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:06.160 [2024-11-27 07:30:17.081841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2610560 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:06.160 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:06.160 [2024-11-27 07:30:17.182946] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
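After the subsystem and listener are re-created and the second perf run is launched, the script polls rather than waits: kill -0 delivers no signal and merely tests whether the PID still exists, and the loop budgets about ten seconds (20 probes at 0.5 s each). Roughly the loop behind the repeated kill -0 / sleep 0.5 entries that follow, with names matching the trace:

    perf_pid=$!     # the backgrounded spdk_nvme_perf
    delay=0
    # kill -0 checks for existence without signalling; "No such process"
    # on a finished run is the success path seen in this log.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf still running after 10 s" >&2; break; }
        sleep 0.5
    done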
00:35:06.420 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:06.420 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:06.420 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:06.991 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:06.991 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:06.991 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:07.563 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:07.563 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:07.563 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:08.135 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:08.135 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:08.135 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:08.706 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:08.707 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:08.707 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:08.967 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:08.967 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:08.967 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:09.538 Initializing NVMe Controllers 00:35:09.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:09.538 Controller IO queue size 128, less than required. 00:35:09.538 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:09.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:09.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:09.538 Initialization complete. Launching workers. 
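The repeating @60/@57/@58 lines above and below are a single bounded polling loop: every half second the script tests with kill -0 whether the backgrounded perf process still exists, giving up after roughly twenty iterations. Condensed into standalone form (a sketch; the variable names mirror delete_subsystem.sh, and the exact loop structure in the script may differ from this arrangement):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 sends no signal; it only tests existence
        (( delay++ > 20 )) && break             # ~20 iterations * 0.5 s = ~10 s upper bound
        sleep 0.5
    done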
00:35:09.538 ======================================================== 00:35:09.539 Latency(us) 00:35:09.539 Device Information : IOPS MiB/s Average min max 00:35:09.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002390.52 1000252.55 1041598.30 00:35:09.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004385.11 1000207.19 1043283.93 00:35:09.539 ======================================================== 00:35:09.539 Total : 256.00 0.12 1003387.82 1000207.19 1043283.93 00:35:09.539 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2610560 00:35:09.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2610560) - No such process 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2610560 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:09.539 rmmod nvme_tcp 00:35:09.539 rmmod nvme_fabrics 00:35:09.539 rmmod nvme_keyring 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2609534 ']' 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2609534 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2609534 ']' 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2609534 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.539 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2609534 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2609534' 00:35:09.800 killing process with pid 2609534 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2609534 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2609534 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.800 07:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.343 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:12.343 00:35:12.343 real 0m18.339s 00:35:12.343 user 0m26.623s 00:35:12.343 sys 0m7.508s 00:35:12.343 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.343 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:12.343 ************************************ 00:35:12.343 END TEST nvmf_delete_subsystem 00:35:12.343 ************************************ 00:35:12.343 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:12.343 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:12.343 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:35:12.343 07:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:12.343 ************************************ 00:35:12.343 START TEST nvmf_host_management 00:35:12.343 ************************************ 00:35:12.343 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:12.343 * Looking for test storage... 00:35:12.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:12.343 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:12.343 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:35:12.343 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:12.343 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:12.343 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:12.343 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:12.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.344 --rc genhtml_branch_coverage=1 00:35:12.344 --rc genhtml_function_coverage=1 00:35:12.344 --rc genhtml_legend=1 00:35:12.344 --rc geninfo_all_blocks=1 00:35:12.344 --rc geninfo_unexecuted_blocks=1 00:35:12.344 00:35:12.344 ' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:12.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.344 --rc genhtml_branch_coverage=1 00:35:12.344 --rc genhtml_function_coverage=1 00:35:12.344 --rc genhtml_legend=1 00:35:12.344 --rc geninfo_all_blocks=1 00:35:12.344 --rc geninfo_unexecuted_blocks=1 00:35:12.344 00:35:12.344 ' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:12.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.344 --rc genhtml_branch_coverage=1 00:35:12.344 --rc genhtml_function_coverage=1 00:35:12.344 --rc genhtml_legend=1 00:35:12.344 --rc geninfo_all_blocks=1 00:35:12.344 --rc geninfo_unexecuted_blocks=1 00:35:12.344 00:35:12.344 ' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:12.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.344 --rc genhtml_branch_coverage=1 00:35:12.344 --rc genhtml_function_coverage=1 00:35:12.344 --rc genhtml_legend=1 
00:35:12.344 --rc geninfo_all_blocks=1 00:35:12.344 --rc geninfo_unexecuted_blocks=1 00:35:12.344 00:35:12.344 ' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.344 07:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:35:12.344 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:20.488 07:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:20.488 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:20.489 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:20.489 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
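What the pci_devs/pci_net_devs trace above is doing: nvmftestinit scans the PCI bus for NICs on its allow-lists (here both ports of an Intel E810, device ID 0x159b, matched into the e810 array) and then resolves each matched PCI address to a kernel interface name through the device's sysfs net/ subdirectory. That resolution step reduces to a glob; a standalone sketch using the vendor/device IDs matched in this run:

    # For each PCI function, match Intel (0x8086) E810 (0x159b) and list
    # the net interfaces the kernel bound to it, as nvmf/common.sh does above.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done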
00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:20.489 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:20.489 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:20.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:20.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:35:20.489 00:35:20.489 --- 10.0.0.2 ping statistics --- 00:35:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.489 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:20.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:20.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:35:20.489 00:35:20.489 --- 10.0.0.1 ping statistics --- 00:35:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.489 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2615316 00:35:20.489 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2615316 00:35:20.490 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:35:20.490 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2615316 ']' 00:35:20.490 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.490 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.490 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:20.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.490 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.490 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.490 [2024-11-27 07:30:30.842562] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:20.490 [2024-11-27 07:30:30.843704] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:35:20.490 [2024-11-27 07:30:30.843757] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:20.490 [2024-11-27 07:30:30.947468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:20.490 [2024-11-27 07:30:31.001487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.490 [2024-11-27 07:30:31.001537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:20.490 [2024-11-27 07:30:31.001546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.490 [2024-11-27 07:30:31.001554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.490 [2024-11-27 07:30:31.001560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:20.490 [2024-11-27 07:30:31.003891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:20.490 [2024-11-27 07:30:31.004051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:20.490 [2024-11-27 07:30:31.004215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.490 [2024-11-27 07:30:31.004215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:20.490 [2024-11-27 07:30:31.091261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:20.490 [2024-11-27 07:30:31.092336] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:20.490 [2024-11-27 07:30:31.092592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:20.490 [2024-11-27 07:30:31.093220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:20.490 [2024-11-27 07:30:31.093251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
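By this point the host_management setup is complete: nvmftestinit moved one E810 port (cvl_0_0) into a private network namespace as the target side, kept its peer (cvl_0_1) in the root namespace as the initiator side, verified both directions with ping, and nvmfappstart launched nvmf_tgt inside the namespace in interrupt mode (hence the thread.c interrupt-mode notices above). Collapsed out of the trace, the plumbing is (a sketch using this run's interface and namespace names; adjust for other hardware):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The target then runs inside the namespace, as traced above:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E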
00:35:20.490 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.490 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:35:20.490 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:20.490 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:20.490 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.766 [2024-11-27 07:30:31.701063] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.766 Malloc0 00:35:20.766 [2024-11-27 07:30:31.805403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2615603 00:35:20.766 07:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2615603 /var/tmp/bdevperf.sock 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2615603 ']' 00:35:20.766 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:20.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:20.767 { 00:35:20.767 "params": { 00:35:20.767 "name": "Nvme$subsystem", 00:35:20.767 "trtype": "$TEST_TRANSPORT", 00:35:20.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.767 "adrfam": "ipv4", 00:35:20.767 "trsvcid": "$NVMF_PORT", 00:35:20.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.767 "hdgst": ${hdgst:-false}, 00:35:20.767 "ddgst": ${ddgst:-false} 00:35:20.767 }, 00:35:20.767 "method": "bdev_nvme_attach_controller" 00:35:20.767 } 00:35:20.767 EOF 00:35:20.767 )") 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
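The heredoc template above is how the harness's gen_nvmf_target_json helper emits one bdev_nvme_attach_controller parameter block per subsystem; the fully rendered JSON it prints appears just below, and bdevperf receives it through bash process substitution, which is where the /dev/fd/63 in the @72 trace line comes from. Reconstructed as a plain invocation (a sketch; gen_nvmf_target_json is the shell helper traced above, not a standalone tool):

    # bash replaces <(...) with a /dev/fd/NN path (the /dev/fd/63 seen above),
    # so bdevperf reads the generated JSON config without a temp file.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10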
00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:20.767 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:20.767 "params": { 00:35:20.767 "name": "Nvme0", 00:35:20.767 "trtype": "tcp", 00:35:20.767 "traddr": "10.0.0.2", 00:35:20.767 "adrfam": "ipv4", 00:35:20.767 "trsvcid": "4420", 00:35:20.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.767 "hdgst": false, 00:35:20.767 "ddgst": false 00:35:20.767 }, 00:35:20.767 "method": "bdev_nvme_attach_controller" 00:35:20.767 }' 00:35:20.767 [2024-11-27 07:30:31.916058] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:35:20.767 [2024-11-27 07:30:31.916135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615603 ] 00:35:21.029 [2024-11-27 07:30:32.010202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.029 [2024-11-27 07:30:32.062776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.291 Running I/O for 10 seconds... 00:35:21.553 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.553 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:35:21.553 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:21.553 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.553 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.817 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:21.817 [2024-11-27 07:30:32.812823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22980e0 is same with the state(6) to be set 00:35:21.817 [2024-11-27 07:30:32.813236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813368] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.817 [2024-11-27 07:30:32.813730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.817 [2024-11-27 07:30:32.813738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.813983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.813992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.818 [2024-11-27 07:30:32.814402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.818 [2024-11-27 07:30:32.814539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:21.818 [2024-11-27 07:30:32.814552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:35:21.818 [2024-11-27 07:30:32.814561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:21.819 [2024-11-27 07:30:32.814569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:21.819 [2024-11-27 07:30:32.814578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:21.819 [2024-11-27 07:30:32.814586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:21.819 [2024-11-27 07:30:32.814594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:21.819 [2024-11-27 07:30:32.814602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:21.819 [2024-11-27 07:30:32.814610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e0010 is same with the state(6) to be set
00:35:21.819 [2024-11-27 07:30:32.815817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:35:21.819 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.819 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:35:21.819 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.819 task offset: 95104 on job bdev=Nvme0n1 fails
00:35:21.819
00:35:21.819 Latency(us)
00:35:21.819 [2024-11-27T06:30:33.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:21.819 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:35:21.819 Job: Nvme0n1 ended in about 0.54 seconds with error
00:35:21.819 Verification LBA range: start 0x0 length 0x400
00:35:21.819 Nvme0n1 : 0.54 1294.03 80.88 117.64 0.00 44219.51 1966.08 38229.33
00:35:21.819 [2024-11-27T06:30:33.024Z] ===================================================================================================================
00:35:21.819 [2024-11-27T06:30:33.024Z] Total : 1294.03 80.88 117.64 0.00 44219.51 1966.08 38229.33
00:35:21.819 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:21.819 [2024-11-27 07:30:32.818293] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:35:21.819 [2024-11-27 07:30:32.818344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e0010 (9): Bad file descriptor
00:35:21.819 [2024-11-27 07:30:32.819827] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:35:21.819 [2024-11-27 07:30:32.819920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:35:21.819 [2024-11-27 07:30:32.819950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:21.819 [2024-11-27 07:30:32.819968] nvme_fabric.c:
599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:35:21.819 [2024-11-27 07:30:32.819978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:35:21.819 [2024-11-27 07:30:32.819986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.819 [2024-11-27 07:30:32.819994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9e0010
00:35:21.819 [2024-11-27 07:30:32.820016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e0010 (9): Bad file descriptor
00:35:21.819 [2024-11-27 07:30:32.820029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:35:21.819 [2024-11-27 07:30:32.820037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:35:21.819 [2024-11-27 07:30:32.820048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:35:21.819 [2024-11-27 07:30:32.820058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:35:21.819 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.819 07:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2615603
00:35:22.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2615603) - No such process
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:22.763 {
00:35:22.763 "params": {
00:35:22.763 "name": "Nvme$subsystem",
00:35:22.763 "trtype": "$TEST_TRANSPORT",
00:35:22.763 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:22.763 "adrfam": "ipv4",
00:35:22.763 "trsvcid": "$NVMF_PORT",
00:35:22.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:22.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:22.763 "hdgst": ${hdgst:-false},
00:35:22.763 "ddgst": ${ddgst:-false}
00:35:22.763 },
00:35:22.763 "method": "bdev_nvme_attach_controller"
00:35:22.763 }
00:35:22.763 EOF
00:35:22.763 )")
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:35:22.763 07:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:22.763 "params": {
00:35:22.763 "name": "Nvme0",
00:35:22.763 "trtype": "tcp",
00:35:22.763 "traddr": "10.0.0.2",
00:35:22.763 "adrfam": "ipv4",
00:35:22.763 "trsvcid": "4420",
00:35:22.763 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:22.763 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:22.763 "hdgst": false,
00:35:22.763 "ddgst": false
00:35:22.763 },
00:35:22.763 "method": "bdev_nvme_attach_controller"
00:35:22.763 }'
00:35:22.763 [2024-11-27 07:30:33.891004] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:35:22.763 [2024-11-27 07:30:33.891084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615956 ]
00:35:23.025 [2024-11-27 07:30:33.983358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:23.025 [2024-11-27 07:30:34.034963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:23.025 Running I/O for 1 seconds...
00:35:24.413 1681.00 IOPS, 105.06 MiB/s
00:35:24.413 Latency(us)
00:35:24.413 [2024-11-27T06:30:35.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:24.413 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:35:24.413 Verification LBA range: start 0x0 length 0x400
00:35:24.413 Nvme0n1 : 1.01 1730.99 108.19 0.00 0.00 36225.77 2839.89 37573.97
00:35:24.413 [2024-11-27T06:30:35.618Z] ===================================================================================================================
00:35:24.413 [2024-11-27T06:30:35.618Z] Total : 1730.99 108.19 0.00 0.00 36225.77 2839.89 37573.97
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:24.413 rmmod nvme_tcp
00:35:24.413 rmmod nvme_fabrics
00:35:24.413 rmmod nvme_keyring
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2615316 ']'
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2615316
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2615316 ']'
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2615316
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:24.413 07:30:35
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2615316 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2615316' 00:35:24.413 killing process with pid 2615316 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2615316 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2615316 00:35:24.413 [2024-11-27 07:30:35.568600] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.413 07:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:26.964 00:35:26.964 real 0m14.645s 00:35:26.964 user 0m19.131s 00:35:26.964 sys 0m7.442s 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:26.964 ************************************ 00:35:26.964 END TEST nvmf_host_management 00:35:26.964 ************************************ 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:26.964 ************************************ 00:35:26.964 START TEST nvmf_lvol 00:35:26.964 ************************************ 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:26.964 * Looking for test storage... 00:35:26.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.964 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.965 --rc genhtml_branch_coverage=1 00:35:26.965 --rc genhtml_function_coverage=1 00:35:26.965 --rc genhtml_legend=1 00:35:26.965 --rc geninfo_all_blocks=1 00:35:26.965 --rc geninfo_unexecuted_blocks=1 00:35:26.965 00:35:26.965 ' 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.965 --rc genhtml_branch_coverage=1 00:35:26.965 --rc genhtml_function_coverage=1 00:35:26.965 --rc genhtml_legend=1 00:35:26.965 --rc geninfo_all_blocks=1 00:35:26.965 --rc geninfo_unexecuted_blocks=1 00:35:26.965 00:35:26.965 ' 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.965 --rc genhtml_branch_coverage=1 00:35:26.965 --rc genhtml_function_coverage=1 00:35:26.965 --rc genhtml_legend=1 00:35:26.965 --rc geninfo_all_blocks=1 00:35:26.965 --rc geninfo_unexecuted_blocks=1 00:35:26.965 00:35:26.965 ' 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.965 --rc genhtml_branch_coverage=1 00:35:26.965 --rc genhtml_function_coverage=1 00:35:26.965 --rc genhtml_legend=1 00:35:26.965 --rc geninfo_all_blocks=1 00:35:26.965 --rc geninfo_unexecuted_blocks=1 00:35:26.965 00:35:26.965 ' 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.965 07:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.965 07:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:26.965 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.966 07:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:35.108 07:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:35.108 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:35.108 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:35.108 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:35.108 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:35.108 
07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:35.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:35.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:35:35.108 00:35:35.108 --- 10.0.0.2 ping statistics --- 00:35:35.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.108 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:35.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:35.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:35:35.108 00:35:35.108 --- 10.0.0.1 ping statistics --- 00:35:35.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.108 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2620445 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2620445 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2620445 ']' 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.108 07:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:35.108 [2024-11-27 07:30:45.654940] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
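A note for readers following the trace: nvmftestinit (nvmf/common.sh) has just built a self-contained NVMe/TCP test network out of the two E810 ports by moving one of them into a network namespace, so initiator and target traffic really crosses the wire between the two ports. A condensed sketch of the sequence, using the interface and address names from this run (cvl_0_0, cvl_0_1 and 10.0.0.0/24 are specific to this CI host):

# Target-side port goes into its own namespace; the initiator side stays
# in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1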
00:35:35.108 [2024-11-27 07:30:45.656058] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:35:35.108 [2024-11-27 07:30:45.656108] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.108 [2024-11-27 07:30:45.757048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.108 [2024-11-27 07:30:45.809504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.108 [2024-11-27 07:30:45.809558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.108 [2024-11-27 07:30:45.809566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.108 [2024-11-27 07:30:45.809573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.108 [2024-11-27 07:30:45.809579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.108 [2024-11-27 07:30:45.811462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.108 [2024-11-27 07:30:45.811620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.109 [2024-11-27 07:30:45.811621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.109 [2024-11-27 07:30:45.889497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:35.109 [2024-11-27 07:30:45.890493] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:35.109 [2024-11-27 07:30:45.890561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:35.109 [2024-11-27 07:30:45.890798] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
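The notices above come from nvmfappstart: the target is launched inside the namespace with --interrupt-mode, so each reactor parks on an event fd instead of busy-polling, and every spdk_thread (the app thread plus one nvmf poll group per core) is switched to interrupt mode. Roughly what the helper runs here, with the repository path shortened:

# -m 0x7 pins three reactors to cores 0-2; -i 0 selects shared-memory id 0
# and -e 0xFFFF enables all tracepoint groups, exactly as in the trace.
ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!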
00:35:35.399 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.399 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:35:35.399 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:35.399 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:35.399 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:35.399 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.399 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:35.659 [2024-11-27 07:30:46.668570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.659 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:35.921 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:35:35.921 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:35.921 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:35:35.921 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:35:36.182 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:35:36.443 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=051c4c06-55e7-45ae-89fe-7e21ae952d02 00:35:36.443 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 051c4c06-55e7-45ae-89fe-7e21ae952d02 lvol 20 00:35:36.704 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3fb81301-cf0f-4a5c-9dbe-c2089b61aaa3 00:35:36.704 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:36.704 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3fb81301-cf0f-4a5c-9dbe-c2089b61aaa3 00:35:36.965 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:37.225 [2024-11-27 07:30:48.244441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:35:37.226 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:37.486 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2620993 00:35:37.486 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:35:37.486 07:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:35:38.429 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3fb81301-cf0f-4a5c-9dbe-c2089b61aaa3 MY_SNAPSHOT 00:35:38.689 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b5a5b02f-45fe-42cf-9957-97736616c192 00:35:38.690 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3fb81301-cf0f-4a5c-9dbe-c2089b61aaa3 30 00:35:38.950 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b5a5b02f-45fe-42cf-9957-97736616c192 MY_CLONE 00:35:39.210 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b2d4e8a3-3f29-4c6a-9386-083ba4266ae2 00:35:39.210 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b2d4e8a3-3f29-4c6a-9386-083ba4266ae2 00:35:39.470 07:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2620993 00:35:49.582 Initializing NVMe Controllers 00:35:49.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:49.582 Controller IO queue size 128, less than required. 00:35:49.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:49.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:49.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:49.582 Initialization complete. Launching workers. 
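The nvmf_lvol.sh body traced above is a plain rpc.py sequence: build a raid0 bdev out of two malloc bdevs, carve a logical volume store and a 20 MiB volume out of it, export the volume over NVMe/TCP, and then snapshot/resize/clone/inflate it while spdk_nvme_perf writes to it over the fabric. Condensed, with the UUIDs printed in the trace replaced by shell variables ($rpc stands for the full scripts/rpc.py path used in the trace):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                       # Malloc0
$rpc bdev_malloc_create 64 512                       # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ... spdk_nvme_perf runs in the background against 10.0.0.2:4420 ...
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                     # grow 20 -> 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"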
00:35:49.582 ======================================================== 00:35:49.582 Latency(us) 00:35:49.582 Device Information : IOPS MiB/s Average min max 00:35:49.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15408.90 60.19 8306.88 1898.13 67027.13 00:35:49.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15685.00 61.27 8161.60 2505.42 92107.74 00:35:49.582 ======================================================== 00:35:49.582 Total : 31093.90 121.46 8233.60 1898.13 92107.74 00:35:49.582 00:35:49.582 07:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3fb81301-cf0f-4a5c-9dbe-c2089b61aaa3 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 051c4c06-55e7-45ae-89fe-7e21ae952d02 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:49.582 rmmod nvme_tcp 00:35:49.582 rmmod nvme_fabrics 00:35:49.582 rmmod nvme_keyring 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2620445 ']' 00:35:49.582 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2620445 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2620445 ']' 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2620445 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2620445 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2620445' 00:35:49.583 killing process with pid 2620445 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2620445 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2620445 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.583 07:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:50.962 00:35:50.962 real 0m24.085s 00:35:50.962 user 0m56.251s 00:35:50.962 sys 0m11.120s 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:50.962 ************************************ 00:35:50.962 END TEST nvmf_lvol 00:35:50.962 ************************************ 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:50.962 ************************************ 00:35:50.962 START TEST nvmf_lvs_grow 00:35:50.962 
************************************ 00:35:50.962 07:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:50.962 * Looking for test storage... 00:35:50.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:50.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.963 --rc genhtml_branch_coverage=1 00:35:50.963 --rc genhtml_function_coverage=1 00:35:50.963 --rc genhtml_legend=1 00:35:50.963 --rc geninfo_all_blocks=1 00:35:50.963 --rc geninfo_unexecuted_blocks=1 00:35:50.963 00:35:50.963 ' 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:50.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.963 --rc genhtml_branch_coverage=1 00:35:50.963 --rc genhtml_function_coverage=1 00:35:50.963 --rc genhtml_legend=1 00:35:50.963 --rc geninfo_all_blocks=1 00:35:50.963 --rc geninfo_unexecuted_blocks=1 00:35:50.963 00:35:50.963 ' 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:50.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.963 --rc genhtml_branch_coverage=1 00:35:50.963 --rc genhtml_function_coverage=1 00:35:50.963 --rc genhtml_legend=1 00:35:50.963 --rc geninfo_all_blocks=1 00:35:50.963 --rc geninfo_unexecuted_blocks=1 00:35:50.963 00:35:50.963 ' 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:50.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.963 --rc genhtml_branch_coverage=1 00:35:50.963 --rc genhtml_function_coverage=1 00:35:50.963 --rc genhtml_legend=1 00:35:50.963 --rc geninfo_all_blocks=1 00:35:50.963 --rc geninfo_unexecuted_blocks=1 00:35:50.963 00:35:50.963 ' 00:35:50.963 07:31:02 
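The version dance above is scripts/common.sh deciding whether the installed lcov is older than 2.x (it is, so the branch/function coverage flags get appended by hand). A minimal re-creation of the comparison being traced, not the full cmp_versions implementation:

# lt A B: succeed if version A sorts before version B. Fields are split
# on '.' and '-', and missing fields count as 0 -- the same field-by-field
# walk the xtrace shows for 'lt 1.15 2'.
lt() {
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
                ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
                ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not 'less than'
}
lt 1.15 2 && echo "lcov < 2: appending --rc lcov_*_coverage=1 flags"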
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.963 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.223 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.223 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.223 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.223 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:35:51.224 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:59.367 07:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:59.367 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:59.368 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:59.368 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:59.368 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:59.368 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:59.368 07:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:59.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:35:59.368 00:35:59.368 --- 10.0.0.2 ping statistics --- 00:35:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.368 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:59.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:35:59.368 00:35:59.368 --- 10.0.0.1 ping statistics --- 00:35:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.368 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2627336 00:35:59.368 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2627336 00:35:59.369 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:59.369 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2627336 ']' 00:35:59.369 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.369 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.369 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.369 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.369 07:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:59.369 [2024-11-27 07:31:09.817251] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
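The network setup traced above is self-contained and can be reproduced outside the harness with stock iproute2/iptables. A minimal sketch, assuming two ports of the same E810 NIC already bound to the ice driver and showing up as cvl_0_0 and cvl_0_1 (interface names vary per system):

    # move the target-side port into its own namespace so both ends of the
    # NVMe/TCP connection can live on one host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # sanity check before nvmf_tgt starts

nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD prefix in the trace expands to.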
00:35:59.369 [2024-11-27 07:31:09.818405] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:35:59.369 [2024-11-27 07:31:09.818460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.369 [2024-11-27 07:31:09.918591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.369 [2024-11-27 07:31:09.969963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.369 [2024-11-27 07:31:09.970018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:59.369 [2024-11-27 07:31:09.970027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.369 [2024-11-27 07:31:09.970034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:59.369 [2024-11-27 07:31:09.970040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.369 [2024-11-27 07:31:09.970850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.369 [2024-11-27 07:31:10.052290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:59.369 [2024-11-27 07:31:10.052562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:59.630 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.630 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:35:59.630 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:59.630 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.630 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:59.630 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.630 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:59.892 [2024-11-27 07:31:10.835744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:59.892 ************************************ 00:35:59.892 START TEST lvs_grow_clean 00:35:59.892 ************************************ 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:59.892 07:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:00.154 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:00.154 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:00.154 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:00.154 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:00.154 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:00.414 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:00.414 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:00.414 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e lvol 150 00:36:00.674 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9a9a862-36a7-4cc2-a5ed-5343508276d6 00:36:00.674 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:00.674 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:00.674 [2024-11-27 07:31:11.855414] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:00.674 [2024-11-27 07:31:11.855576] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:00.674 true 00:36:00.674 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:00.934 07:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:00.934 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:00.934 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:01.196 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9a9a862-36a7-4cc2-a5ed-5343508276d6 00:36:01.458 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:01.458 [2024-11-27 07:31:12.588097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.458 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2627870 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2627870 /var/tmp/bdevperf.sock 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2627870 ']' 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:01.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.720 07:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:01.720 [2024-11-27 07:31:12.845932] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:01.720 [2024-11-27 07:31:12.846004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627870 ] 00:36:01.982 [2024-11-27 07:31:12.939044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.982 [2024-11-27 07:31:12.991662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.556 07:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.556 07:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:36:02.556 07:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:03.129 Nvme0n1 00:36:03.129 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:03.129 [ 00:36:03.129 { 00:36:03.129 "name": "Nvme0n1", 00:36:03.129 "aliases": [ 00:36:03.129 "a9a9a862-36a7-4cc2-a5ed-5343508276d6" 00:36:03.129 ], 00:36:03.129 "product_name": "NVMe disk", 00:36:03.129 "block_size": 4096, 00:36:03.129 "num_blocks": 38912, 00:36:03.129 "uuid": "a9a9a862-36a7-4cc2-a5ed-5343508276d6", 00:36:03.129 "numa_id": 0, 00:36:03.129 "assigned_rate_limits": { 00:36:03.129 "rw_ios_per_sec": 0, 00:36:03.129 "rw_mbytes_per_sec": 0, 00:36:03.129 "r_mbytes_per_sec": 0, 00:36:03.129 "w_mbytes_per_sec": 0 00:36:03.129 }, 00:36:03.129 "claimed": false, 00:36:03.129 "zoned": false, 00:36:03.129 "supported_io_types": { 00:36:03.129 "read": true, 00:36:03.129 "write": true, 00:36:03.129 "unmap": true, 00:36:03.129 "flush": true, 00:36:03.129 "reset": true, 00:36:03.129 "nvme_admin": true, 00:36:03.129 "nvme_io": true, 00:36:03.129 "nvme_io_md": false, 00:36:03.129 "write_zeroes": true, 00:36:03.129 "zcopy": false, 00:36:03.129 "get_zone_info": false, 00:36:03.129 "zone_management": false, 00:36:03.129 "zone_append": false, 00:36:03.129 "compare": true, 00:36:03.129 "compare_and_write": true, 00:36:03.129 "abort": true, 00:36:03.129 "seek_hole": false, 00:36:03.129 "seek_data": false, 00:36:03.129 "copy": true, 
00:36:03.129 "nvme_iov_md": false 00:36:03.129 }, 00:36:03.129 "memory_domains": [ 00:36:03.129 { 00:36:03.129 "dma_device_id": "system", 00:36:03.129 "dma_device_type": 1 00:36:03.129 } 00:36:03.129 ], 00:36:03.129 "driver_specific": { 00:36:03.129 "nvme": [ 00:36:03.129 { 00:36:03.129 "trid": { 00:36:03.129 "trtype": "TCP", 00:36:03.129 "adrfam": "IPv4", 00:36:03.129 "traddr": "10.0.0.2", 00:36:03.129 "trsvcid": "4420", 00:36:03.129 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:03.129 }, 00:36:03.129 "ctrlr_data": { 00:36:03.129 "cntlid": 1, 00:36:03.129 "vendor_id": "0x8086", 00:36:03.129 "model_number": "SPDK bdev Controller", 00:36:03.129 "serial_number": "SPDK0", 00:36:03.129 "firmware_revision": "25.01", 00:36:03.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.129 "oacs": { 00:36:03.129 "security": 0, 00:36:03.129 "format": 0, 00:36:03.129 "firmware": 0, 00:36:03.129 "ns_manage": 0 00:36:03.129 }, 00:36:03.129 "multi_ctrlr": true, 00:36:03.129 "ana_reporting": false 00:36:03.129 }, 00:36:03.129 "vs": { 00:36:03.129 "nvme_version": "1.3" 00:36:03.129 }, 00:36:03.129 "ns_data": { 00:36:03.129 "id": 1, 00:36:03.129 "can_share": true 00:36:03.129 } 00:36:03.129 } 00:36:03.129 ], 00:36:03.129 "mp_policy": "active_passive" 00:36:03.129 } 00:36:03.129 } 00:36:03.129 ] 00:36:03.129 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2628071 00:36:03.129 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:03.129 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:03.392 Running I/O for 10 seconds... 
00:36:04.336 Latency(us) 00:36:04.336 [2024-11-27T06:31:15.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:04.336 Nvme0n1 : 1.00 16955.00 66.23 0.00 0.00 0.00 0.00 0.00 00:36:04.336 [2024-11-27T06:31:15.541Z] =================================================================================================================== 00:36:04.336 [2024-11-27T06:31:15.541Z] Total : 16955.00 66.23 0.00 0.00 0.00 0.00 0.00 00:36:04.336 00:36:05.280 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:05.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:05.280 Nvme0n1 : 2.00 17233.00 67.32 0.00 0.00 0.00 0.00 0.00 00:36:05.280 [2024-11-27T06:31:16.485Z] =================================================================================================================== 00:36:05.280 [2024-11-27T06:31:16.485Z] Total : 17233.00 67.32 0.00 0.00 0.00 0.00 0.00 00:36:05.280 00:36:05.280 true 00:36:05.280 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:05.280 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:05.541 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:05.541 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:05.541 07:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2628071 00:36:06.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:06.483 Nvme0n1 : 3.00 17500.00 68.36 0.00 0.00 0.00 0.00 0.00 00:36:06.483 [2024-11-27T06:31:17.688Z] =================================================================================================================== 00:36:06.483 [2024-11-27T06:31:17.688Z] Total : 17500.00 68.36 0.00 0.00 0.00 0.00 0.00 00:36:06.483 00:36:07.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:07.424 Nvme0n1 : 4.00 17951.00 70.12 0.00 0.00 0.00 0.00 0.00 00:36:07.424 [2024-11-27T06:31:18.629Z] =================================================================================================================== 00:36:07.424 [2024-11-27T06:31:18.629Z] Total : 17951.00 70.12 0.00 0.00 0.00 0.00 0.00 00:36:07.424 00:36:08.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:08.363 Nvme0n1 : 5.00 19491.60 76.14 0.00 0.00 0.00 0.00 0.00 00:36:08.363 [2024-11-27T06:31:19.568Z] =================================================================================================================== 00:36:08.363 [2024-11-27T06:31:19.568Z] Total : 19491.60 76.14 0.00 0.00 0.00 0.00 0.00 00:36:08.363 00:36:09.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:09.304 Nvme0n1 : 6.00 20518.83 80.15 0.00 0.00 0.00 0.00 0.00 00:36:09.304 [2024-11-27T06:31:20.509Z] 
=================================================================================================================== 00:36:09.304 [2024-11-27T06:31:20.509Z] Total : 20518.83 80.15 0.00 0.00 0.00 0.00 0.00 00:36:09.304 00:36:10.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:10.245 Nvme0n1 : 7.00 21261.57 83.05 0.00 0.00 0.00 0.00 0.00 00:36:10.245 [2024-11-27T06:31:21.450Z] =================================================================================================================== 00:36:10.245 [2024-11-27T06:31:21.450Z] Total : 21261.57 83.05 0.00 0.00 0.00 0.00 0.00 00:36:10.245 00:36:11.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:11.187 Nvme0n1 : 8.00 21824.62 85.25 0.00 0.00 0.00 0.00 0.00 00:36:11.187 [2024-11-27T06:31:22.392Z] =================================================================================================================== 00:36:11.187 [2024-11-27T06:31:22.392Z] Total : 21824.62 85.25 0.00 0.00 0.00 0.00 0.00 00:36:11.187 00:36:12.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:12.570 Nvme0n1 : 9.00 22264.22 86.97 0.00 0.00 0.00 0.00 0.00 00:36:12.570 [2024-11-27T06:31:23.775Z] =================================================================================================================== 00:36:12.570 [2024-11-27T06:31:23.775Z] Total : 22264.22 86.97 0.00 0.00 0.00 0.00 0.00 00:36:12.570 00:36:13.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:13.511 Nvme0n1 : 10.00 22615.90 88.34 0.00 0.00 0.00 0.00 0.00 00:36:13.511 [2024-11-27T06:31:24.716Z] =================================================================================================================== 00:36:13.511 [2024-11-27T06:31:24.716Z] Total : 22615.90 88.34 0.00 0.00 0.00 0.00 0.00 00:36:13.511 00:36:13.511 00:36:13.511 Latency(us) 00:36:13.511 [2024-11-27T06:31:24.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:13.511 Nvme0n1 : 10.01 22615.85 88.34 0.00 0.00 5657.04 2880.85 31238.83 00:36:13.511 [2024-11-27T06:31:24.716Z] =================================================================================================================== 00:36:13.511 [2024-11-27T06:31:24.716Z] Total : 22615.85 88.34 0.00 0.00 5657.04 2880.85 31238.83 00:36:13.511 { 00:36:13.511 "results": [ 00:36:13.511 { 00:36:13.511 "job": "Nvme0n1", 00:36:13.511 "core_mask": "0x2", 00:36:13.511 "workload": "randwrite", 00:36:13.511 "status": "finished", 00:36:13.511 "queue_depth": 128, 00:36:13.511 "io_size": 4096, 00:36:13.511 "runtime": 10.00568, 00:36:13.511 "iops": 22615.854194817344, 00:36:13.511 "mibps": 88.34318044850525, 00:36:13.511 "io_failed": 0, 00:36:13.511 "io_timeout": 0, 00:36:13.511 "avg_latency_us": 5657.042152664536, 00:36:13.511 "min_latency_us": 2880.8533333333335, 00:36:13.511 "max_latency_us": 31238.826666666668 00:36:13.511 } 00:36:13.511 ], 00:36:13.511 "core_count": 1 00:36:13.511 } 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2627870 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2627870 ']' 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2627870 
00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627870 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627870' 00:36:13.511 killing process with pid 2627870 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2627870 00:36:13.511 Received shutdown signal, test time was about 10.000000 seconds 00:36:13.511 00:36:13.511 Latency(us) 00:36:13.511 [2024-11-27T06:31:24.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.511 [2024-11-27T06:31:24.716Z] =================================================================================================================== 00:36:13.511 [2024-11-27T06:31:24.716Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2627870 00:36:13.511 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:13.771 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.771 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:13.771 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:14.032 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:14.032 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:36:14.032 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:14.293 [2024-11-27 07:31:25.243483] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 
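The NOT wrapper announced at the end of the trace above asserts the failure path: once aio_bdev has been deleted, the lvstore must no longer be resolvable, and the lookup has to fail cleanly rather than crash. Outside the harness the same check is roughly (a sketch; $LVS as above):

    rpc.py bdev_aio_delete aio_bdev            # hot-removes the base bdev, closing the lvstore
    if rpc.py bdev_lvol_get_lvstores -u $LVS; then
        echo 'lookup of a removed lvstore unexpectedly succeeded' >&2
        exit 1
    fi
    # expected: JSON-RPC error code -19, 'No such device', as traced below

The trace that follows is the harness's NOT helper expanding to the same idea, and the JSON-RPC error response it captures is the -19/"No such device" pair.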
00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:14.293 request: 00:36:14.293 { 00:36:14.293 "uuid": "40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e", 00:36:14.293 "method": "bdev_lvol_get_lvstores", 00:36:14.293 "req_id": 1 00:36:14.293 } 00:36:14.293 Got JSON-RPC error response 00:36:14.293 response: 00:36:14.293 { 00:36:14.293 "code": -19, 00:36:14.293 "message": "No such device" 00:36:14.293 } 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:14.293 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:14.553 aio_bdev 00:36:14.553 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a9a9a862-36a7-4cc2-a5ed-5343508276d6 00:36:14.553 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a9a9a862-36a7-4cc2-a5ed-5343508276d6 00:36:14.553 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:14.553 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:36:14.553 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:14.553 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:14.553 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:14.814 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9a9a862-36a7-4cc2-a5ed-5343508276d6 -t 2000 00:36:14.814 [ 00:36:14.814 { 00:36:14.814 "name": "a9a9a862-36a7-4cc2-a5ed-5343508276d6", 00:36:14.814 "aliases": [ 00:36:14.814 "lvs/lvol" 00:36:14.814 ], 00:36:14.814 "product_name": "Logical Volume", 00:36:14.814 "block_size": 4096, 00:36:14.814 "num_blocks": 38912, 00:36:14.814 "uuid": "a9a9a862-36a7-4cc2-a5ed-5343508276d6", 00:36:14.814 "assigned_rate_limits": { 00:36:14.814 "rw_ios_per_sec": 0, 00:36:14.814 "rw_mbytes_per_sec": 0, 00:36:14.814 "r_mbytes_per_sec": 0, 00:36:14.814 "w_mbytes_per_sec": 0 00:36:14.814 }, 00:36:14.814 "claimed": false, 00:36:14.814 "zoned": false, 00:36:14.814 "supported_io_types": { 00:36:14.814 "read": true, 00:36:14.814 "write": true, 00:36:14.814 "unmap": true, 00:36:14.814 "flush": false, 00:36:14.814 "reset": true, 00:36:14.814 "nvme_admin": false, 00:36:14.814 "nvme_io": false, 00:36:14.814 "nvme_io_md": false, 00:36:14.814 "write_zeroes": true, 00:36:14.814 "zcopy": false, 00:36:14.814 "get_zone_info": false, 00:36:14.814 "zone_management": false, 00:36:14.814 "zone_append": false, 00:36:14.814 "compare": false, 00:36:14.814 "compare_and_write": false, 00:36:14.814 "abort": false, 00:36:14.814 "seek_hole": true, 00:36:14.814 "seek_data": true, 00:36:14.814 "copy": false, 00:36:14.814 "nvme_iov_md": false 00:36:14.814 }, 00:36:14.814 "driver_specific": { 00:36:14.814 "lvol": { 00:36:14.814 "lvol_store_uuid": "40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e", 00:36:14.814 "base_bdev": "aio_bdev", 00:36:14.814 "thin_provision": false, 00:36:14.814 "num_allocated_clusters": 38, 00:36:14.814 "snapshot": false, 00:36:14.814 "clone": false, 00:36:14.814 "esnap_clone": false 00:36:14.814 } 00:36:14.814 } 00:36:14.814 } 00:36:14.814 ] 00:36:15.074 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:36:15.074 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:15.074 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:15.074 07:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:15.074 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:15.074 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:15.335 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:15.335 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9a9a862-36a7-4cc2-a5ed-5343508276d6 00:36:15.595 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40fc8d9f-4ca7-4747-ab8c-4de4fab8a80e 00:36:15.595 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:15.857 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:15.857 00:36:15.857 real 0m16.056s 00:36:15.857 user 0m15.741s 00:36:15.857 sys 0m1.436s 00:36:15.857 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.857 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.857 ************************************ 00:36:15.857 END TEST lvs_grow_clean 00:36:15.857 ************************************ 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:15.857 ************************************ 00:36:15.857 START TEST lvs_grow_dirty 00:36:15.857 ************************************ 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:15.857 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:16.119 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:16.119 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:16.381 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:16.381 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:16.381 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:16.642 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:16.642 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:16.642 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 lvol 150 00:36:16.642 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bb880327-9b58-4054-9132-3e6993c156db 00:36:16.642 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:16.642 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:16.903 [2024-11-27 07:31:27.971402] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:16.903 [2024-11-27 07:31:27.971552] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:16.903 true 00:36:16.903 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:16.903 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:17.165 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:17.165 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:17.165 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bb880327-9b58-4054-9132-3e6993c156db 00:36:17.426 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.688 [2024-11-27 07:31:28.684034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2630872 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2630872 /var/tmp/bdevperf.sock 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2630872 ']' 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:17.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:17.688 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:17.949 [2024-11-27 07:31:28.941433] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:17.949 [2024-11-27 07:31:28.941544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630872 ] 00:36:17.949 [2024-11-27 07:31:29.027998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.949 [2024-11-27 07:31:29.059817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.519 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:18.519 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:36:18.519 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:19.091 Nvme0n1 00:36:19.091 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:19.091 [ 00:36:19.091 { 00:36:19.091 "name": "Nvme0n1", 00:36:19.091 "aliases": [ 00:36:19.091 "bb880327-9b58-4054-9132-3e6993c156db" 00:36:19.091 ], 00:36:19.091 "product_name": "NVMe disk", 00:36:19.091 "block_size": 4096, 00:36:19.092 "num_blocks": 38912, 00:36:19.092 "uuid": "bb880327-9b58-4054-9132-3e6993c156db", 00:36:19.092 "numa_id": 0, 00:36:19.092 "assigned_rate_limits": { 00:36:19.092 "rw_ios_per_sec": 0, 00:36:19.092 "rw_mbytes_per_sec": 0, 00:36:19.092 "r_mbytes_per_sec": 0, 00:36:19.092 "w_mbytes_per_sec": 0 00:36:19.092 }, 00:36:19.092 "claimed": false, 00:36:19.092 "zoned": false, 00:36:19.092 "supported_io_types": { 00:36:19.092 "read": true, 00:36:19.092 "write": true, 00:36:19.092 "unmap": true, 00:36:19.092 "flush": true, 00:36:19.092 "reset": true, 00:36:19.092 "nvme_admin": true, 00:36:19.092 "nvme_io": true, 00:36:19.092 "nvme_io_md": false, 00:36:19.092 "write_zeroes": true, 00:36:19.092 "zcopy": false, 00:36:19.092 "get_zone_info": false, 00:36:19.092 "zone_management": false, 00:36:19.092 "zone_append": false, 00:36:19.092 "compare": true, 00:36:19.092 "compare_and_write": true, 00:36:19.092 "abort": true, 00:36:19.092 "seek_hole": false, 00:36:19.092 "seek_data": false, 00:36:19.092 "copy": true, 00:36:19.092 "nvme_iov_md": false 00:36:19.092 }, 00:36:19.092 "memory_domains": [ 00:36:19.092 { 00:36:19.092 "dma_device_id": "system", 00:36:19.092 "dma_device_type": 1 00:36:19.092 } 00:36:19.092 ], 00:36:19.092 "driver_specific": { 00:36:19.092 "nvme": [ 00:36:19.092 { 00:36:19.092 "trid": { 00:36:19.092 "trtype": "TCP", 00:36:19.092 "adrfam": "IPv4", 00:36:19.092 "traddr": "10.0.0.2", 00:36:19.092 "trsvcid": "4420", 00:36:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:19.092 }, 00:36:19.092 "ctrlr_data": 
{
00:36:19.092 "cntlid": 1,
00:36:19.092 "vendor_id": "0x8086",
00:36:19.092 "model_number": "SPDK bdev Controller",
00:36:19.092 "serial_number": "SPDK0",
00:36:19.092 "firmware_revision": "25.01",
00:36:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:19.092 "oacs": {
00:36:19.092 "security": 0,
00:36:19.092 "format": 0,
00:36:19.092 "firmware": 0,
00:36:19.092 "ns_manage": 0
00:36:19.092 },
00:36:19.092 "multi_ctrlr": true,
00:36:19.092 "ana_reporting": false
00:36:19.092 },
00:36:19.092 "vs": {
00:36:19.092 "nvme_version": "1.3"
00:36:19.092 },
00:36:19.092 "ns_data": {
00:36:19.092 "id": 1,
00:36:19.092 "can_share": true
00:36:19.092 }
00:36:19.092 }
00:36:19.092 ],
00:36:19.092 "mp_policy": "active_passive"
00:36:19.092 }
00:36:19.092 }
00:36:19.092 ]
00:36:19.353 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2631141
00:36:19.353 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:36:19.353 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:36:19.353 Running I/O for 10 seconds...
00:36:20.297 Latency(us)
00:36:20.297 [2024-11-27T06:31:31.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:20.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:20.297 Nvme0n1 : 1.00 17653.00 68.96 0.00 0.00 0.00 0.00 0.00
00:36:20.297 [2024-11-27T06:31:31.502Z] ===================================================================================================================
00:36:20.297 [2024-11-27T06:31:31.502Z] Total : 17653.00 68.96 0.00 0.00 0.00 0.00 0.00
00:36:20.297
00:36:21.239 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5
00:36:21.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:21.239 Nvme0n1 : 2.00 17907.00 69.95 0.00 0.00 0.00 0.00 0.00
00:36:21.239 [2024-11-27T06:31:32.444Z] ===================================================================================================================
00:36:21.239 [2024-11-27T06:31:32.445Z] Total : 17907.00 69.95 0.00 0.00 0.00 0.00 0.00
00:36:21.240
00:36:21.502 true
00:36:21.502 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5
00:36:21.502 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:36:21.502 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:36:21.502 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:36:21.502 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2631141
00:36:22.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:22.446 Nvme0n1 : 3.00 17991.67 70.28 0.00 0.00 0.00 0.00 0.00
00:36:22.446 [2024-11-27T06:31:33.651Z] ===================================================================================================================
00:36:22.446 [2024-11-27T06:31:33.651Z] Total : 17991.67 70.28 0.00 0.00 0.00 0.00 0.00
00:36:22.446
00:36:23.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:23.388 Nvme0n1 : 4.00 18065.75 70.57 0.00 0.00 0.00 0.00 0.00
00:36:23.388 [2024-11-27T06:31:34.593Z] ===================================================================================================================
00:36:23.388 [2024-11-27T06:31:34.593Z] Total : 18065.75 70.57 0.00 0.00 0.00 0.00 0.00
00:36:23.388
00:36:24.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:24.329 Nvme0n1 : 5.00 18770.60 73.32 0.00 0.00 0.00 0.00 0.00
00:36:24.329 [2024-11-27T06:31:35.534Z] ===================================================================================================================
00:36:24.329 [2024-11-27T06:31:35.534Z] Total : 18770.60 73.32 0.00 0.00 0.00 0.00 0.00
00:36:24.329
00:36:25.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:25.271 Nvme0n1 : 6.00 19917.83 77.80 0.00 0.00 0.00 0.00 0.00
00:36:25.271 [2024-11-27T06:31:36.476Z] ===================================================================================================================
00:36:25.271 [2024-11-27T06:31:36.476Z] Total : 19917.83 77.80 0.00 0.00 0.00 0.00 0.00
00:36:25.271
00:36:26.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:26.212 Nvme0n1 : 7.00 20755.43 81.08 0.00 0.00 0.00 0.00 0.00
00:36:26.212 [2024-11-27T06:31:37.417Z] ===================================================================================================================
00:36:26.212 [2024-11-27T06:31:37.417Z] Total : 20755.43 81.08 0.00 0.00 0.00 0.00 0.00
00:36:26.212
00:36:27.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:27.595 Nvme0n1 : 8.00 21367.75 83.47 0.00 0.00 0.00 0.00 0.00
00:36:27.595 [2024-11-27T06:31:38.800Z] ===================================================================================================================
00:36:27.595 [2024-11-27T06:31:38.800Z] Total : 21367.75 83.47 0.00 0.00 0.00 0.00 0.00
00:36:27.595
00:36:28.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:28.536 Nvme0n1 : 9.00 21858.11 85.38 0.00 0.00 0.00 0.00 0.00
00:36:28.536 [2024-11-27T06:31:39.741Z] ===================================================================================================================
00:36:28.536 [2024-11-27T06:31:39.741Z] Total : 21858.11 85.38 0.00 0.00 0.00 0.00 0.00
00:36:28.536
00:36:29.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:29.480 Nvme0n1 : 10.00 22237.70 86.87 0.00 0.00 0.00 0.00 0.00
00:36:29.480 [2024-11-27T06:31:40.685Z] ===================================================================================================================
00:36:29.480 [2024-11-27T06:31:40.685Z] Total : 22237.70 86.87 0.00 0.00 0.00 0.00 0.00
00:36:29.480
00:36:29.480
00:36:29.480 Latency(us)
00:36:29.480 [2024-11-27T06:31:40.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:29.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:29.480 Nvme0n1 : 10.00 22243.26 86.89 0.00 0.00 5752.18 4587.52 31457.28
00:36:29.480 [2024-11-27T06:31:40.685Z] ===================================================================================================================
00:36:29.480 [2024-11-27T06:31:40.685Z] Total : 22243.26 86.89 0.00 0.00 5752.18 4587.52 31457.28
00:36:29.480 {
00:36:29.480 "results": [
00:36:29.480 {
00:36:29.480 "job": "Nvme0n1",
00:36:29.480 "core_mask": "0x2",
00:36:29.480 "workload": "randwrite",
00:36:29.480 "status": "finished",
00:36:29.480 "queue_depth": 128,
00:36:29.480 "io_size": 4096,
00:36:29.480 "runtime": 10.003256,
00:36:29.480 "iops": 22243.257595326962,
00:36:29.480 "mibps": 86.88772498174595,
00:36:29.480 "io_failed": 0,
00:36:29.480 "io_timeout": 0,
00:36:29.480 "avg_latency_us": 5752.17887909635,
00:36:29.480 "min_latency_us": 4587.52,
00:36:29.480 "max_latency_us": 31457.28
00:36:29.480 }
00:36:29.480 ],
00:36:29.480 "core_count": 1
00:36:29.480 }
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2630872
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2630872 ']'
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2630872
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630872
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630872'
killing process with pid 2630872
07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2630872
00:36:29.480 Received shutdown signal, test time was about 10.000000 seconds
00:36:29.480
00:36:29.480 Latency(us)
00:36:29.480 [2024-11-27T06:31:40.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:29.480 [2024-11-27T06:31:40.685Z] ===================================================================================================================
00:36:29.480 [2024-11-27T06:31:40.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2630872
00:36:29.480 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:29.743 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
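For reference, the grow-and-verify sequence traced above reduces to two RPC calls plus a jq check against the running target. A minimal sketch, assuming a target on the default /var/tmp/spdk.sock and the lvstore UUID from this run, with the absolute rpc.py path shortened to scripts/rpc.py:

  # grow the lvstore into the newly enlarged base bdev
  scripts/rpc.py bdev_lvol_grow_lvstore -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5
  # re-read the lvstore and confirm the expected cluster count
  data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 \
                  | jq -r '.[0].total_data_clusters')
  (( data_clusters == 99 ))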
00:36:30.003 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:30.003 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:30.003 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:30.003 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:36:30.003 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2627336 00:36:30.003 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2627336 00:36:30.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2627336 Killed "${NVMF_APP[@]}" "$@" 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2633180 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2633180 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2633180 ']' 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:30.265 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:30.265 [2024-11-27 07:31:41.288422] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:30.265 [2024-11-27 07:31:41.289520] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:30.265 [2024-11-27 07:31:41.289577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.265 [2024-11-27 07:31:41.384353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.265 [2024-11-27 07:31:41.414177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.265 [2024-11-27 07:31:41.414204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.265 [2024-11-27 07:31:41.414209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.265 [2024-11-27 07:31:41.414214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.265 [2024-11-27 07:31:41.414218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.265 [2024-11-27 07:31:41.414633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.265 [2024-11-27 07:31:41.465331] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:30.265 [2024-11-27 07:31:41.465510] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
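The restart above is the core of the dirty-lvstore scenario: the previous nvmf_tgt (pid 2627336) is killed with SIGKILL so the lvstore is never cleanly unloaded, and a fresh single-core target is started in interrupt mode so recovery can be observed. A condensed sketch of what the harness runs, using the namespace and binary path from this job:

  kill -9 2627336    # leave the lvstore dirty on purpose
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  # then wait for /var/tmp/spdk.sock before issuing RPCs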
00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:31.210 [2024-11-27 07:31:42.305074] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:31.210 [2024-11-27 07:31:42.305343] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:31.210 [2024-11-27 07:31:42.305434] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bb880327-9b58-4054-9132-3e6993c156db 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bb880327-9b58-4054-9132-3e6993c156db 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:31.210 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:31.471 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bb880327-9b58-4054-9132-3e6993c156db -t 2000 00:36:31.471 [ 00:36:31.471 { 00:36:31.471 "name": "bb880327-9b58-4054-9132-3e6993c156db", 00:36:31.471 "aliases": [ 00:36:31.471 "lvs/lvol" 00:36:31.471 ], 00:36:31.471 "product_name": "Logical Volume", 00:36:31.471 "block_size": 4096, 00:36:31.471 "num_blocks": 38912, 00:36:31.471 "uuid": "bb880327-9b58-4054-9132-3e6993c156db", 00:36:31.471 "assigned_rate_limits": { 00:36:31.471 "rw_ios_per_sec": 0, 00:36:31.471 "rw_mbytes_per_sec": 0, 00:36:31.471 
"r_mbytes_per_sec": 0, 00:36:31.471 "w_mbytes_per_sec": 0 00:36:31.471 }, 00:36:31.471 "claimed": false, 00:36:31.471 "zoned": false, 00:36:31.471 "supported_io_types": { 00:36:31.471 "read": true, 00:36:31.471 "write": true, 00:36:31.471 "unmap": true, 00:36:31.471 "flush": false, 00:36:31.471 "reset": true, 00:36:31.471 "nvme_admin": false, 00:36:31.471 "nvme_io": false, 00:36:31.471 "nvme_io_md": false, 00:36:31.471 "write_zeroes": true, 00:36:31.471 "zcopy": false, 00:36:31.471 "get_zone_info": false, 00:36:31.471 "zone_management": false, 00:36:31.471 "zone_append": false, 00:36:31.471 "compare": false, 00:36:31.471 "compare_and_write": false, 00:36:31.471 "abort": false, 00:36:31.471 "seek_hole": true, 00:36:31.471 "seek_data": true, 00:36:31.471 "copy": false, 00:36:31.471 "nvme_iov_md": false 00:36:31.471 }, 00:36:31.471 "driver_specific": { 00:36:31.471 "lvol": { 00:36:31.471 "lvol_store_uuid": "9865acad-a8a6-4780-ab6e-ea8ac02630e5", 00:36:31.471 "base_bdev": "aio_bdev", 00:36:31.471 "thin_provision": false, 00:36:31.471 "num_allocated_clusters": 38, 00:36:31.471 "snapshot": false, 00:36:31.471 "clone": false, 00:36:31.471 "esnap_clone": false 00:36:31.471 } 00:36:31.471 } 00:36:31.471 } 00:36:31.471 ] 00:36:31.471 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:36:31.471 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:31.471 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:36:31.731 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:36:31.731 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:31.731 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:36:31.992 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:36:31.992 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:31.992 [2024-11-27 07:31:43.167145] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:32.254 07:31:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:32.254 request: 00:36:32.254 { 00:36:32.254 "uuid": "9865acad-a8a6-4780-ab6e-ea8ac02630e5", 00:36:32.254 "method": "bdev_lvol_get_lvstores", 00:36:32.254 "req_id": 1 00:36:32.254 } 00:36:32.254 Got JSON-RPC error response 00:36:32.254 response: 00:36:32.254 { 00:36:32.254 "code": -19, 00:36:32.254 "message": "No such device" 00:36:32.254 } 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:32.254 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:32.516 aio_bdev 00:36:32.516 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bb880327-9b58-4054-9132-3e6993c156db 00:36:32.516 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bb880327-9b58-4054-9132-3e6993c156db 00:36:32.516 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:32.516 07:31:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:36:32.516 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:32.516 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:32.516 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:32.776 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bb880327-9b58-4054-9132-3e6993c156db -t 2000 00:36:32.776 [ 00:36:32.776 { 00:36:32.776 "name": "bb880327-9b58-4054-9132-3e6993c156db", 00:36:32.776 "aliases": [ 00:36:32.776 "lvs/lvol" 00:36:32.776 ], 00:36:32.776 "product_name": "Logical Volume", 00:36:32.776 "block_size": 4096, 00:36:32.776 "num_blocks": 38912, 00:36:32.776 "uuid": "bb880327-9b58-4054-9132-3e6993c156db", 00:36:32.776 "assigned_rate_limits": { 00:36:32.776 "rw_ios_per_sec": 0, 00:36:32.776 "rw_mbytes_per_sec": 0, 00:36:32.776 "r_mbytes_per_sec": 0, 00:36:32.776 "w_mbytes_per_sec": 0 00:36:32.776 }, 00:36:32.776 "claimed": false, 00:36:32.776 "zoned": false, 00:36:32.776 "supported_io_types": { 00:36:32.776 "read": true, 00:36:32.776 "write": true, 00:36:32.776 "unmap": true, 00:36:32.776 "flush": false, 00:36:32.776 "reset": true, 00:36:32.776 "nvme_admin": false, 00:36:32.776 "nvme_io": false, 00:36:32.776 "nvme_io_md": false, 00:36:32.776 "write_zeroes": true, 00:36:32.776 "zcopy": false, 00:36:32.776 "get_zone_info": false, 00:36:32.776 "zone_management": false, 00:36:32.776 "zone_append": false, 00:36:32.776 "compare": false, 00:36:32.776 "compare_and_write": false, 00:36:32.776 "abort": false, 00:36:32.776 "seek_hole": true, 00:36:32.777 "seek_data": true, 00:36:32.777 "copy": false, 00:36:32.777 "nvme_iov_md": false 00:36:32.777 }, 00:36:32.777 "driver_specific": { 00:36:32.777 "lvol": { 00:36:32.777 "lvol_store_uuid": "9865acad-a8a6-4780-ab6e-ea8ac02630e5", 00:36:32.777 "base_bdev": "aio_bdev", 00:36:32.777 "thin_provision": false, 00:36:32.777 "num_allocated_clusters": 38, 00:36:32.777 "snapshot": false, 00:36:32.777 "clone": false, 00:36:32.777 "esnap_clone": false 00:36:32.777 } 00:36:32.777 } 00:36:32.777 } 00:36:32.777 ] 00:36:32.777 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:36:32.777 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:32.777 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:33.037 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:33.037 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:33.037 07:31:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:33.299 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:33.299 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bb880327-9b58-4054-9132-3e6993c156db 00:36:33.299 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5 00:36:33.559 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:33.820 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:33.820 00:36:33.820 real 0m17.769s 00:36:33.820 user 0m35.624s 00:36:33.820 sys 0m3.140s 00:36:33.820 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:33.820 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:33.821 ************************************ 00:36:33.821 END TEST lvs_grow_dirty 00:36:33.821 ************************************ 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:33.821 nvmf_trace.0 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
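The teardown traced above unwinds the objects in reverse order of creation: the lvol first, then the lvstore, then the aio bdev, and finally its backing file. Sketched with the identifiers from this run (rpc.py path shortened):

  scripts/rpc.py bdev_lvol_delete bb880327-9b58-4054-9132-3e6993c156db
  scripts/rpc.py bdev_lvol_delete_lvstore -u 9865acad-a8a6-4780-ab6e-ea8ac02630e5
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f test/nvmf/target/aio_bdev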
00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:33.821 rmmod nvme_tcp 00:36:33.821 rmmod nvme_fabrics 00:36:33.821 rmmod nvme_keyring 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2633180 ']' 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2633180 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2633180 ']' 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2633180 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.821 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633180 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2633180' 00:36:34.082 killing process with pid 2633180 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2633180 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2633180 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:34.082 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:36.631 00:36:36.631 real 0m45.329s 00:36:36.631 user 0m54.394s 00:36:36.631 sys 0m10.792s 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:36.631 ************************************ 00:36:36.631 END TEST nvmf_lvs_grow 00:36:36.631 ************************************ 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:36.631 ************************************ 00:36:36.631 START TEST nvmf_bdev_io_wait 00:36:36.631 ************************************ 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:36.631 * Looking for test storage... 
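Each suite is launched through the harness's run_test wrapper (traced here out of common/autotest_common.sh), which names the test, times it, and prints the START/END banners seen in this log. The invocation above is equivalent to the following sketch, where $rootdir stands in for the absolute SPDK checkout path shown in the trace:

  run_test "nvmf_bdev_io_wait" \
      "$rootdir/test/nvmf/target/bdev_io_wait.sh" --transport=tcp --interrupt-mode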
00:36:36.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.631 --rc genhtml_branch_coverage=1 00:36:36.631 --rc genhtml_function_coverage=1 00:36:36.631 --rc genhtml_legend=1 00:36:36.631 --rc geninfo_all_blocks=1 00:36:36.631 --rc geninfo_unexecuted_blocks=1 00:36:36.631 00:36:36.631 ' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.631 --rc genhtml_branch_coverage=1 00:36:36.631 --rc genhtml_function_coverage=1 00:36:36.631 --rc genhtml_legend=1 00:36:36.631 --rc geninfo_all_blocks=1 00:36:36.631 --rc geninfo_unexecuted_blocks=1 00:36:36.631 00:36:36.631 ' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.631 --rc genhtml_branch_coverage=1 00:36:36.631 --rc genhtml_function_coverage=1 00:36:36.631 --rc genhtml_legend=1 00:36:36.631 --rc geninfo_all_blocks=1 00:36:36.631 --rc geninfo_unexecuted_blocks=1 00:36:36.631 00:36:36.631 ' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:36.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.631 --rc genhtml_branch_coverage=1 00:36:36.631 --rc genhtml_function_coverage=1 00:36:36.631 --rc genhtml_legend=1 00:36:36.631 --rc geninfo_all_blocks=1 00:36:36.631 --rc 
geninfo_unexecuted_blocks=1 00:36:36.631 00:36:36.631 ' 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:36.631 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:36:36.632 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:44.897 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:44.897 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:44.897 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:44.897 
07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:44.897 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:44.897 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:44.898 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:44.898 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:44.898 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:44.898 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:44.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:36:44.898 00:36:44.898 --- 10.0.0.2 ping statistics --- 00:36:44.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.898 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:44.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:44.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:36:44.898 00:36:44.898 --- 10.0.0.1 ping statistics --- 00:36:44.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.898 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2638216 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2638216 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2638216 ']' 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
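The bring-up just traced is nvmf_tcp_init's two-namespace topology: the first E810 port (cvl_0_0, the target side) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the second port (cvl_0_1, the initiator side) keeps 10.0.0.1/24 in the root namespace, an iptables rule tagged SPDK_NVMF opens TCP/4420, and one ping in each direction proves the path before nvmf_tgt starts. Condensed into a runnable sketch (root required; interface names as on this rig):

  # Hedged sketch of the same namespace topology.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator port, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'          # the tag makes teardown a grep, see later
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
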
00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:44.898 [2024-11-27 07:31:55.171595] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:44.898 [2024-11-27 07:31:55.172746] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:44.898 [2024-11-27 07:31:55.172798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.898 [2024-11-27 07:31:55.271746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:44.898 [2024-11-27 07:31:55.326913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:44.898 [2024-11-27 07:31:55.326965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:44.898 [2024-11-27 07:31:55.326974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:44.898 [2024-11-27 07:31:55.326981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:44.898 [2024-11-27 07:31:55.326987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:44.898 [2024-11-27 07:31:55.329031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.898 [2024-11-27 07:31:55.329211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:44.898 [2024-11-27 07:31:55.329309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.898 [2024-11-27 07:31:55.329306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:44.898 [2024-11-27 07:31:55.329965] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:44.898 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.898 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:44.898 [2024-11-27 07:31:56.098730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:45.161 [2024-11-27 07:31:56.099436] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:45.161 [2024-11-27 07:31:56.099516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:45.161 [2024-11-27 07:31:56.099690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
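With nvmf_tgt held at --wait-for-rpc, the test shrinks the bdev_io pool before the framework initializes; that is the point of bdev_io_wait: a pool of 5 with a cache of 1 makes bdev I/O submission hit ENOMEM quickly, forcing the io_wait retry path. The trace's rpc_cmd wrapper talks to /var/tmp/spdk.sock, so the same bring-up can be sketched with scripts/rpc.py directly (paths relative to the SPDK repo; the waitforlisten loop the harness runs is omitted; the transport/subsystem calls are the ones traced next):

  # Hedged sketch of the target bring-up traced above and just below.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF --interrupt-mode --wait-for-rpc &
  ./scripts/rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool: force the wait path
  ./scripts/rpc.py framework_start_init              # release the --wait-for-rpc hold
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
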
00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:45.161 [2024-11-27 07:31:56.110177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:45.161 Malloc0 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:45.161 [2024-11-27 07:31:56.186643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2638423 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2638425 00:36:45.161 07:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.161 { 00:36:45.161 "params": { 00:36:45.161 "name": "Nvme$subsystem", 00:36:45.161 "trtype": "$TEST_TRANSPORT", 00:36:45.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.161 "adrfam": "ipv4", 00:36:45.161 "trsvcid": "$NVMF_PORT", 00:36:45.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.161 "hdgst": ${hdgst:-false}, 00:36:45.161 "ddgst": ${ddgst:-false} 00:36:45.161 }, 00:36:45.161 "method": "bdev_nvme_attach_controller" 00:36:45.161 } 00:36:45.161 EOF 00:36:45.161 )") 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2638427 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:36:45.161 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.162 { 00:36:45.162 "params": { 00:36:45.162 "name": "Nvme$subsystem", 00:36:45.162 "trtype": "$TEST_TRANSPORT", 00:36:45.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.162 "adrfam": "ipv4", 00:36:45.162 "trsvcid": "$NVMF_PORT", 00:36:45.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.162 "hdgst": ${hdgst:-false}, 00:36:45.162 "ddgst": ${ddgst:-false} 00:36:45.162 }, 00:36:45.162 "method": "bdev_nvme_attach_controller" 00:36:45.162 } 00:36:45.162 EOF 00:36:45.162 )") 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2638430 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.162 { 00:36:45.162 "params": { 00:36:45.162 "name": "Nvme$subsystem", 00:36:45.162 "trtype": "$TEST_TRANSPORT", 00:36:45.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.162 "adrfam": "ipv4", 00:36:45.162 "trsvcid": "$NVMF_PORT", 00:36:45.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.162 "hdgst": ${hdgst:-false}, 00:36:45.162 "ddgst": ${ddgst:-false} 00:36:45.162 }, 00:36:45.162 "method": "bdev_nvme_attach_controller" 00:36:45.162 } 00:36:45.162 EOF 00:36:45.162 )") 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.162 { 00:36:45.162 "params": { 00:36:45.162 "name": "Nvme$subsystem", 00:36:45.162 "trtype": "$TEST_TRANSPORT", 00:36:45.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.162 "adrfam": "ipv4", 00:36:45.162 "trsvcid": "$NVMF_PORT", 00:36:45.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.162 "hdgst": ${hdgst:-false}, 00:36:45.162 "ddgst": ${ddgst:-false} 00:36:45.162 }, 00:36:45.162 "method": "bdev_nvme_attach_controller" 00:36:45.162 } 00:36:45.162 EOF 00:36:45.162 )") 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2638423 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
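Each bdevperf instance receives its bdev_nvme configuration as --json /dev/fd/63, i.e. a process substitution fed by gen_nvmf_target_json: the helper emits one attach-controller fragment per subsystem through a here-doc, joins the fragments with IFS=',', and runs the result through jq, which both validates and pretty-prints it (the printf output appears below). A reduced sketch of that pattern (the function name and the envelope handling are simplifications of the real helper):

  # Hedged sketch of the fragment generation traced above; the real helper
  # also wraps the result in the full JSON document bdevperf loads.
  gen_fragments() {
    local s frags=()
    for s in "${@:-1}"; do                    # default: one subsystem, "1"
      frags+=("{ \"params\": { \"name\": \"Nvme$s\", \"trtype\": \"tcp\",
        \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\",
        \"subnqn\": \"nqn.2016-06.io.spdk:cnode$s\",
        \"hostnqn\": \"nqn.2016-06.io.spdk:host$s\",
        \"hdgst\": false, \"ddgst\": false },
        \"method\": \"bdev_nvme_attach_controller\" }")
    done
    local IFS=,
    printf '[%s]\n' "${frags[*]}" | jq .      # comma-join, then validate with jq
  }
  gen_fragments 1
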
00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:45.162 "params": { 00:36:45.162 "name": "Nvme1", 00:36:45.162 "trtype": "tcp", 00:36:45.162 "traddr": "10.0.0.2", 00:36:45.162 "adrfam": "ipv4", 00:36:45.162 "trsvcid": "4420", 00:36:45.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:45.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:45.162 "hdgst": false, 00:36:45.162 "ddgst": false 00:36:45.162 }, 00:36:45.162 "method": "bdev_nvme_attach_controller" 00:36:45.162 }' 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:45.162 "params": { 00:36:45.162 "name": "Nvme1", 00:36:45.162 "trtype": "tcp", 00:36:45.162 "traddr": "10.0.0.2", 00:36:45.162 "adrfam": "ipv4", 00:36:45.162 "trsvcid": "4420", 00:36:45.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:45.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:45.162 "hdgst": false, 00:36:45.162 "ddgst": false 00:36:45.162 }, 00:36:45.162 "method": "bdev_nvme_attach_controller" 00:36:45.162 }' 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:45.162 "params": { 00:36:45.162 "name": "Nvme1", 00:36:45.162 "trtype": "tcp", 00:36:45.162 "traddr": "10.0.0.2", 00:36:45.162 "adrfam": "ipv4", 00:36:45.162 "trsvcid": "4420", 00:36:45.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:45.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:45.162 "hdgst": false, 00:36:45.162 "ddgst": false 00:36:45.162 }, 00:36:45.162 "method": "bdev_nvme_attach_controller" 00:36:45.162 }' 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:45.162 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:45.162 "params": { 00:36:45.162 "name": "Nvme1", 00:36:45.162 "trtype": "tcp", 00:36:45.162 "traddr": "10.0.0.2", 00:36:45.162 "adrfam": "ipv4", 00:36:45.162 "trsvcid": "4420", 00:36:45.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:45.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:45.162 "hdgst": false, 00:36:45.162 "ddgst": false 00:36:45.162 }, 00:36:45.162 "method": "bdev_nvme_attach_controller" 00:36:45.162 }' 00:36:45.162 [2024-11-27 07:31:56.244671] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:45.162 [2024-11-27 07:31:56.244747] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:36:45.162 [2024-11-27 07:31:56.246588] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:36:45.162 [2024-11-27 07:31:56.246653] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:36:45.162 [2024-11-27 07:31:56.251050] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:45.162 [2024-11-27 07:31:56.251112] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:36:45.162 [2024-11-27 07:31:56.255842] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:45.162 [2024-11-27 07:31:56.255904] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:36:45.424 [2024-11-27 07:31:56.465507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.424 [2024-11-27 07:31:56.507982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:45.424 [2024-11-27 07:31:56.557029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.424 [2024-11-27 07:31:56.599753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:45.424 [2024-11-27 07:31:56.626897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.685 [2024-11-27 07:31:56.664311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:45.685 [2024-11-27 07:31:56.691978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.685 [2024-11-27 07:31:56.728689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:45.685 Running I/O for 1 seconds... 00:36:45.685 Running I/O for 1 seconds... 00:36:45.685 Running I/O for 1 seconds... 00:36:45.946 Running I/O for 1 seconds... 
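Four bdevperf processes are now running concurrently against the same Malloc0-backed namespace, one workload per instance and one dedicated core each: write on mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80, each with its own -i shared-memory id so the instances do not collide. The harness records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waits on each in turn, which is why the result tables below arrive interleaved. Skeleton of that orchestration ($BDEVPERF and $CFG are stand-ins for the paths the trace shows):

  # Hedged sketch: one bdevperf per workload, pinned to its own core.
  declare -A mask=([write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80)
  declare -A pid; i=1
  for w in write read flush unmap; do
    "$BDEVPERF" -m "${mask[$w]}" -i $((i++)) --json "$CFG" \
      -q 128 -o 4096 -w "$w" -t 1 -s 256 &
    pid[$w]=$!
  done
  for w in write read flush unmap; do wait "${pid[$w]}"; done
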
00:36:46.889 183128.00 IOPS, 715.34 MiB/s
00:36:46.889 Latency(us)
00:36:46.889 [2024-11-27T06:31:58.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:46.889 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:36:46.889 Nvme1n1 : 1.00 182762.98 713.92 0.00 0.00 696.61 296.96 1993.39
00:36:46.889 [2024-11-27T06:31:58.094Z] ===================================================================================================================
00:36:46.889 [2024-11-27T06:31:58.094Z] Total : 182762.98 713.92 0.00 0.00 696.61 296.96 1993.39
00:36:46.889 11928.00 IOPS, 46.59 MiB/s
[2024-11-27T06:31:58.094Z] 10189.00 IOPS, 39.80 MiB/s
00:36:46.889 Latency(us)
00:36:46.889 [2024-11-27T06:31:58.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:46.889 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:36:46.889 Nvme1n1 : 1.01 11985.69 46.82 0.00 0.00 10642.18 5707.09 14745.60
[2024-11-27T06:31:58.094Z] ===================================================================================================================
[2024-11-27T06:31:58.094Z] Total : 11985.69 46.82 0.00 0.00 10642.18 5707.09 14745.60
00:36:46.889
00:36:46.889 Latency(us)
00:36:46.889 [2024-11-27T06:31:58.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:46.889 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:36:46.889 Nvme1n1 : 1.01 10240.57 40.00 0.00 0.00 12449.35 2348.37 17257.81
00:36:46.889 [2024-11-27T06:31:58.094Z] ===================================================================================================================
00:36:46.889 [2024-11-27T06:31:58.094Z] Total : 10240.57 40.00 0.00 0.00 12449.35 2348.37 17257.81
00:36:46.889 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2638425
00:36:46.889 9737.00 IOPS, 38.04 MiB/s
00:36:46.889 Latency(us)
00:36:46.889 [2024-11-27T06:31:58.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:46.889 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:36:46.889 Nvme1n1 : 1.01 9803.32 38.29 0.00 0.00 13014.83 4887.89 20753.07
00:36:46.889 [2024-11-27T06:31:58.094Z] ===================================================================================================================
00:36:46.889 [2024-11-27T06:31:58.094Z] Total : 9803.32 38.29 0.00 0.00 13014.83 4887.89 20753.07
00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2638427
00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2638430
00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:36:47.150 07:31:58
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:47.150 rmmod nvme_tcp 00:36:47.150 rmmod nvme_fabrics 00:36:47.150 rmmod nvme_keyring 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2638216 ']' 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2638216 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2638216 ']' 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2638216 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2638216 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2638216' 00:36:47.150 killing process with pid 2638216 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2638216 00:36:47.150 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2638216 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:36:47.413 07:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:47.413 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.327 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:49.327 00:36:49.327 real 0m13.135s 00:36:49.327 user 0m15.787s 00:36:49.327 sys 0m7.710s 00:36:49.327 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:49.327 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:49.327 ************************************ 00:36:49.327 END TEST nvmf_bdev_io_wait 00:36:49.327 ************************************ 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:49.589 ************************************ 00:36:49.589 START TEST nvmf_queue_depth 00:36:49.589 ************************************ 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:49.589 * Looking for test storage... 
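Two details of the nvmftestfini teardown above are worth keeping: because every SPDK iptables rule was inserted with an '-m comment SPDK_NVMF' tag, cleanup is a single save/filter/restore pipeline rather than bookkeeping of individual rules, and deleting the namespace returns cvl_0_0 to the root namespace automatically. As a hedged sketch of that sequence:

  # Hedged sketch of the teardown traced above.
  kill "$nvmfpid" && wait "$nvmfpid"                     # nvmf_tgt is a child of the test shell
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # cvl_0_0 falls back to the root namespace
  ip -4 addr flush cvl_0_1
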
00:36:49.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:36:49.589 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:49.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.590 --rc genhtml_branch_coverage=1 00:36:49.590 --rc genhtml_function_coverage=1 00:36:49.590 --rc genhtml_legend=1 00:36:49.590 --rc geninfo_all_blocks=1 00:36:49.590 --rc geninfo_unexecuted_blocks=1 00:36:49.590 00:36:49.590 ' 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:49.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.590 --rc genhtml_branch_coverage=1 00:36:49.590 --rc genhtml_function_coverage=1 00:36:49.590 --rc genhtml_legend=1 00:36:49.590 --rc geninfo_all_blocks=1 00:36:49.590 --rc geninfo_unexecuted_blocks=1 00:36:49.590 00:36:49.590 ' 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:49.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.590 --rc genhtml_branch_coverage=1 00:36:49.590 --rc genhtml_function_coverage=1 00:36:49.590 --rc genhtml_legend=1 00:36:49.590 --rc geninfo_all_blocks=1 00:36:49.590 --rc geninfo_unexecuted_blocks=1 00:36:49.590 00:36:49.590 ' 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:49.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:49.590 --rc genhtml_branch_coverage=1 00:36:49.590 --rc genhtml_function_coverage=1 00:36:49.590 --rc genhtml_legend=1 00:36:49.590 --rc geninfo_all_blocks=1 00:36:49.590 --rc 
geninfo_unexecuted_blocks=1 00:36:49.590 00:36:49.590 ' 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:49.590 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:36:49.852 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:57.996 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:57.996 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:36:57.996 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:57.996 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
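Before gather_supported_nvmf_pci_devs runs again for queue_depth, the preamble made two reusable moves: scripts/common.sh picked lcov flags by comparing version strings component-wise (the lt 1.15 2 trace earlier), and nvmf/common.sh assembled NVMF_APP with -i $NVMF_APP_SHM_ID -e 0xFFFF plus --interrupt-mode. The version comparison is the less obvious of the two; a compact sketch of the same idea (function name illustrative; numeric components assumed, as in the trace):

  # Hedged sketch of the cmp_versions '<' branch traced earlier.
  version_lt() {
    local IFS=.-:                      # split exactly as scripts/common.sh does
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                           # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc option spelling"
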
00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:57.997 07:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:57.997 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:57.997 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:36:57.997 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:57.997 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:57.997 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.997 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:36:57.997 00:36:57.998 --- 10.0.0.2 ping statistics --- 00:36:57.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.998 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:57.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:36:57.998 00:36:57.998 --- 10.0.0.1 ping statistics --- 00:36:57.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.998 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2642941 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2642941 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2642941 ']' 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
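Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above reduces to the following shell steps (a sketch; the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses are the ones this run detected):

    # target port moves into its own namespace; initiator stays in the default one
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1               # start clean
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1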
00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:57.998 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:57.998 [2024-11-27 07:32:08.384280] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:57.998 [2024-11-27 07:32:08.385406] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:36:57.998 [2024-11-27 07:32:08.385462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.998 [2024-11-27 07:32:08.488856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.998 [2024-11-27 07:32:08.538922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.998 [2024-11-27 07:32:08.538970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.998 [2024-11-27 07:32:08.538979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.998 [2024-11-27 07:32:08.538986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.998 [2024-11-27 07:32:08.538992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:57.998 [2024-11-27 07:32:08.539732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.998 [2024-11-27 07:32:08.617960] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:57.998 [2024-11-27 07:32:08.618238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
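The target itself is then launched inside that namespace, single-core and in interrupt mode, which is what the thread.c and reactor.c notices above confirm. The command boils down to (sketch, arguments copied from the nvmfappstart trace):

    # -i 0: shared-memory instance id; -e 0xFFFF: tracepoint group mask;
    # -m 0x2: core mask (core 1 only); --interrupt-mode: reactors sleep when idle
    # instead of busy-polling
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2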
00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:58.258 [2024-11-27 07:32:09.252588] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:58.258 Malloc0 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:58.258 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:58.259 [2024-11-27 07:32:09.344649] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2643249 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2643249 /var/tmp/bdevperf.sock 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2643249 ']' 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:58.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.259 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:58.259 [2024-11-27 07:32:09.401691] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
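Behind the rpc_cmd wrappers traced above, queue_depth.sh configures the target with five RPCs before bdevperf starts (a sketch using the rpc.py client against the default /var/tmp/spdk.sock; all arguments are copied from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf, running in the default namespace, then connects back through the listener with bdev_nvme_attach_controller, as the next trace lines show.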
00:36:58.259 [2024-11-27 07:32:09.401756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2643249 ]
00:36:58.519 [2024-11-27 07:32:09.492391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:58.519 [2024-11-27 07:32:09.544968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:59.089 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:59.089 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:36:59.089 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:36:59.089 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.089 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:36:59.349 NVMe0n1
00:36:59.349 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.349 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:36:59.349 Running I/O for 10 seconds...
00:37:01.685 8622.00 IOPS, 33.68 MiB/s
[2024-11-27T06:32:13.833Z] 8877.00 IOPS, 34.68 MiB/s
[2024-11-27T06:32:14.778Z] 9570.33 IOPS, 37.38 MiB/s
[2024-11-27T06:32:15.723Z] 10497.75 IOPS, 41.01 MiB/s
[2024-11-27T06:32:16.667Z] 11146.40 IOPS, 43.54 MiB/s
[2024-11-27T06:32:17.610Z] 11584.17 IOPS, 45.25 MiB/s
[2024-11-27T06:32:18.559Z] 11849.00 IOPS, 46.29 MiB/s
[2024-11-27T06:32:19.942Z] 12074.62 IOPS, 47.17 MiB/s
[2024-11-27T06:32:20.883Z] 12278.67 IOPS, 47.96 MiB/s
[2024-11-27T06:32:20.883Z] 12395.80 IOPS, 48.42 MiB/s
00:37:09.678 Latency(us)
00:37:09.678 [2024-11-27T06:32:20.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:09.678 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:37:09.678 Verification LBA range: start 0x0 length 0x4000
00:37:09.678 NVMe0n1 : 10.04 12440.05 48.59 0.00 0.00 82048.22 10485.76 66846.72
00:37:09.678 [2024-11-27T06:32:20.883Z] ===================================================================================================================
00:37:09.678 [2024-11-27T06:32:20.883Z] Total : 12440.05 48.59 0.00 0.00 82048.22 10485.76 66846.72
00:37:09.678 {
00:37:09.678 "results": [
00:37:09.678 {
00:37:09.678 "job": "NVMe0n1",
00:37:09.678 "core_mask": "0x1",
00:37:09.678 "workload": "verify",
00:37:09.678 "status": "finished",
00:37:09.678 "verify_range": {
00:37:09.678 "start": 0,
00:37:09.678 "length": 16384
00:37:09.678 },
00:37:09.678 "queue_depth": 1024,
00:37:09.678 "io_size": 4096,
00:37:09.678 "runtime": 10.043927,
00:37:09.678 "iops": 12440.054572280344,
00:37:09.678 "mibps": 48.59396317297009,
00:37:09.678 "io_failed": 0,
00:37:09.678 "io_timeout": 0,
00:37:09.678 "avg_latency_us": 82048.22312180363,
00:37:09.678 "min_latency_us": 10485.76,
00:37:09.678 "max_latency_us": 66846.72
00:37:09.678 }
00:37:09.678 ],
00:37:09.678 "core_count": 1
00:37:09.678 }
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2643249
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2643249 ']'
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2643249
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2643249
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2643249'
killing process with pid 2643249
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2643249
Received shutdown signal, test time was about 10.000000 seconds
00:37:09.678
00:37:09.678 Latency(us)
00:37:09.678 [2024-11-27T06:32:20.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:09.678 [2024-11-27T06:32:20.883Z] ===================================================================================================================
00:37:09.678 [2024-11-27T06:32:20.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2643249
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:09.678 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:37:09.678 07:32:20
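A quick plausibility check on those numbers (editorial arithmetic, not output of the test): at a sustained queue depth of 1024, Little's law predicts depth ~ IOPS x average latency, and the reported throughput and latency line up with the configured -q 1024:

    # values taken from the JSON results above (latency in microseconds)
    awk 'BEGIN { printf "in-flight ~ %.0f\n", 12440.054572280344 * 82048.22312180363 / 1e6 }'
    # -> in-flight ~ 1021, close to the configured queue depth of 1024
    # bandwidth cross-check: 12440.05 IOPS * 4096 B / 2^20 ~ 48.59 MiB/s, matching "mibps"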
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2642941 ']' 00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2642941 00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2642941 ']' 00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2642941 00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:09.678 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642941 00:37:09.938 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:09.939 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:09.939 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642941' 00:37:09.939 killing process with pid 2642941 00:37:09.939 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2642941 00:37:09.939 07:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2642941 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.939 07:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:12.484 00:37:12.484 real 0m22.551s 00:37:12.484 user 0m24.429s 00:37:12.484 sys 0m7.634s 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:12.484 ************************************ 00:37:12.484 END TEST nvmf_queue_depth 00:37:12.484 ************************************ 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:12.484 ************************************ 00:37:12.484 START TEST nvmf_target_multipath 00:37:12.484 ************************************ 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:12.484 * Looking for test storage... 00:37:12.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:37:12.484 07:32:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.484 --rc genhtml_branch_coverage=1 00:37:12.484 --rc genhtml_function_coverage=1 00:37:12.484 --rc genhtml_legend=1 00:37:12.484 --rc geninfo_all_blocks=1 00:37:12.484 --rc geninfo_unexecuted_blocks=1 00:37:12.484 00:37:12.484 ' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.484 --rc genhtml_branch_coverage=1 00:37:12.484 --rc genhtml_function_coverage=1 00:37:12.484 --rc genhtml_legend=1 00:37:12.484 --rc geninfo_all_blocks=1 00:37:12.484 --rc geninfo_unexecuted_blocks=1 00:37:12.484 00:37:12.484 ' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.484 --rc genhtml_branch_coverage=1 00:37:12.484 --rc genhtml_function_coverage=1 00:37:12.484 --rc genhtml_legend=1 00:37:12.484 --rc geninfo_all_blocks=1 00:37:12.484 --rc 
geninfo_unexecuted_blocks=1 00:37:12.484 00:37:12.484 ' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.484 --rc genhtml_branch_coverage=1 00:37:12.484 --rc genhtml_function_coverage=1 00:37:12.484 --rc genhtml_legend=1 00:37:12.484 --rc geninfo_all_blocks=1 00:37:12.484 --rc geninfo_unexecuted_blocks=1 00:37:12.484 00:37:12.484 ' 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.484 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.485 07:32:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.485 07:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
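Each suite in this log is driven through the harness's run_test wrapper, which prints the START TEST / END TEST banners visible above. Its observable behaviour is roughly the following (a sketch inferred from the banners only; the real wrapper also records timing and manages xtrace state):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"          # e.g. multipath.sh --transport=tcp --interrupt-mode
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }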
00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:20.627 07:32:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:20.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:20.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:20.627 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:20.627 07:32:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:20.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:20.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:20.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:20.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:37:20.628 00:37:20.628 --- 10.0.0.2 ping statistics --- 00:37:20.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:20.628 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:20.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:20.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:37:20.628 00:37:20.628 --- 10.0.0.1 ping statistics --- 00:37:20.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:20.628 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:37:20.628 only one NIC for nvmf test 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:20.628 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:20.628 rmmod nvme_tcp 00:37:20.628 rmmod nvme_fabrics 00:37:20.628 rmmod nvme_keyring 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:20.628 07:32:31 
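[Editor's note] Distilled from the nvmf_tcp_init trace above: the target-side port (cvl_0_0) is moved into its own network namespace with 10.0.0.2/24 while the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction verifies the link before the test proper. The same setup as a standalone sketch (the command set is taken directly from the trace; error handling omitted):

  ip netns add cvl_0_0_ns_spdk                        # namespace for the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace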
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.628 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:22.015 07:32:33 
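[Editor's note] The nvmftestfini path above undoes that setup: iptr saves the firewall ruleset, drops every rule tagged SPDK_NVMF, and restores the remainder; remove_spdk_ns tears down the namespace; and the leftover initiator address is flushed. A condensed sketch of the teardown (iptables-save/grep/iptables-restore is exactly what the trace shows; `ip netns del` is a plausible stand-in for the harness's _remove_spdk_ns helper, whose body is not in this log):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # scrub only the tagged rules
  ip netns del cvl_0_0_ns_spdk                           # stand-in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator address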
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.015 00:37:22.015 real 0m9.966s 00:37:22.015 user 0m2.180s 00:37:22.015 sys 0m5.740s 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.015 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:22.015 ************************************ 00:37:22.015 END TEST nvmf_target_multipath 00:37:22.015 ************************************ 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.277 ************************************ 00:37:22.277 START TEST nvmf_zcopy 00:37:22.277 ************************************ 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:22.277 * Looking for test storage... 
00:37:22.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:22.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.277 --rc genhtml_branch_coverage=1 00:37:22.277 --rc genhtml_function_coverage=1 00:37:22.277 --rc genhtml_legend=1 00:37:22.277 --rc geninfo_all_blocks=1 00:37:22.277 --rc geninfo_unexecuted_blocks=1 00:37:22.277 00:37:22.277 ' 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:22.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.277 --rc genhtml_branch_coverage=1 00:37:22.277 --rc genhtml_function_coverage=1 00:37:22.277 --rc genhtml_legend=1 00:37:22.277 --rc geninfo_all_blocks=1 00:37:22.277 --rc geninfo_unexecuted_blocks=1 00:37:22.277 00:37:22.277 ' 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:22.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.277 --rc genhtml_branch_coverage=1 00:37:22.277 --rc genhtml_function_coverage=1 00:37:22.277 --rc genhtml_legend=1 00:37:22.277 --rc geninfo_all_blocks=1 00:37:22.277 --rc geninfo_unexecuted_blocks=1 00:37:22.277 00:37:22.277 ' 00:37:22.277 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:22.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.277 --rc genhtml_branch_coverage=1 00:37:22.278 --rc genhtml_function_coverage=1 00:37:22.278 --rc genhtml_legend=1 00:37:22.278 --rc geninfo_all_blocks=1 00:37:22.278 --rc geninfo_unexecuted_blocks=1 00:37:22.278 00:37:22.278 ' 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.278 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.540 07:32:33 
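[Editor's note] The PATH printed by paths/export.sh above carries the same /opt/golangci, /opt/protoc, /opt/go prefix block repeated several times. That is the expected result of re-sourcing the export script once per nested test, not log corruption. If the growth ever needed trimming, a generic order-preserving dedup one-liner (illustrative only; not part of the harness) would be:

  # Keep the first occurrence of each PATH entry, preserving order.
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++'); PATH=${PATH%:}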
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:37:22.540 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:37:30.718 07:32:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:30.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:30.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:30.718 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:30.719 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:30.719 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:30.719 07:32:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:30.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:30.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:37:30.719 00:37:30.719 --- 10.0.0.2 ping statistics --- 00:37:30.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.719 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:30.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:30.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:37:30.719 00:37:30.719 --- 10.0.0.1 ping statistics --- 00:37:30.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.719 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:30.719 07:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2653611 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2653611 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2653611 ']' 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.719 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 [2024-11-27 07:32:41.108396] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:30.719 [2024-11-27 07:32:41.109545] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:37:30.719 [2024-11-27 07:32:41.109598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:30.719 [2024-11-27 07:32:41.209052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.719 [2024-11-27 07:32:41.258801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:30.719 [2024-11-27 07:32:41.258849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:30.719 [2024-11-27 07:32:41.258857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:30.719 [2024-11-27 07:32:41.258865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:30.719 [2024-11-27 07:32:41.258871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:30.719 [2024-11-27 07:32:41.259593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.719 [2024-11-27 07:32:41.336577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:30.719 [2024-11-27 07:32:41.336851] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
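[Editor's note] At this point the zcopy test starts the target inside the namespace — ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 — and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A sketch of that launch-and-wait pattern (the socket poll below is a simplification of the harness's waitforlisten, not a copy of it):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Poll (up to ~10 s) for the RPC unix socket to appear before issuing RPCs.
  for _ in $(seq 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done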
00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.980 [2024-11-27 07:32:41.976457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.980 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.981 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.981 07:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.981 [2024-11-27 07:32:42.004777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:37:30.981 07:32:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.981 malloc0 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:30.981 { 00:37:30.981 "params": { 00:37:30.981 "name": "Nvme$subsystem", 00:37:30.981 "trtype": "$TEST_TRANSPORT", 00:37:30.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.981 "adrfam": "ipv4", 00:37:30.981 "trsvcid": "$NVMF_PORT", 00:37:30.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.981 "hdgst": ${hdgst:-false}, 00:37:30.981 "ddgst": ${ddgst:-false} 00:37:30.981 }, 00:37:30.981 "method": "bdev_nvme_attach_controller" 00:37:30.981 } 00:37:30.981 EOF 00:37:30.981 )") 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:37:30.981 07:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:30.981 "params": { 00:37:30.981 "name": "Nvme1", 00:37:30.981 "trtype": "tcp", 00:37:30.981 "traddr": "10.0.0.2", 00:37:30.981 "adrfam": "ipv4", 00:37:30.981 "trsvcid": "4420", 00:37:30.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:30.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:30.981 "hdgst": false, 00:37:30.981 "ddgst": false 00:37:30.981 }, 00:37:30.981 "method": "bdev_nvme_attach_controller" 00:37:30.981 }' 00:37:30.981 [2024-11-27 07:32:42.108627] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
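[Editor's note] Before the bdevperf output continues below, it is worth collecting the rpc_cmd calls from the trace above: the zcopy target is provisioned in five steps — create the TCP transport with zero-copy enabled, create subsystem cnode1, add a 10.0.0.2:4420 listener, create a 32 MiB malloc bdev with 4096-byte blocks, and expose it as namespace 1. As plain rpc.py invocations (rpc_cmd is the harness's wrapper around scripts/rpc.py; the -s socket path is the default and shown only for completeness):

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1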
00:37:30.981 [2024-11-27 07:32:42.108695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653850 ] 00:37:31.241 [2024-11-27 07:32:42.199747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.241 [2024-11-27 07:32:42.252444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.503 Running I/O for 10 seconds... 00:37:33.833 6421.00 IOPS, 50.16 MiB/s [2024-11-27T06:32:45.981Z] 6483.50 IOPS, 50.65 MiB/s [2024-11-27T06:32:46.924Z] 6494.00 IOPS, 50.73 MiB/s [2024-11-27T06:32:47.866Z] 6516.25 IOPS, 50.91 MiB/s [2024-11-27T06:32:48.808Z] 7071.60 IOPS, 55.25 MiB/s [2024-11-27T06:32:49.749Z] 7511.83 IOPS, 58.69 MiB/s [2024-11-27T06:32:50.690Z] 7826.00 IOPS, 61.14 MiB/s [2024-11-27T06:32:51.669Z] 8054.75 IOPS, 62.93 MiB/s [2024-11-27T06:32:53.085Z] 8239.44 IOPS, 64.37 MiB/s [2024-11-27T06:32:53.085Z] 8380.60 IOPS, 65.47 MiB/s 00:37:41.880 Latency(us) 00:37:41.880 [2024-11-27T06:32:53.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.880 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:37:41.880 Verification LBA range: start 0x0 length 0x1000 00:37:41.880 Nvme1n1 : 10.01 8385.06 65.51 0.00 0.00 15220.24 2143.57 27088.21 00:37:41.880 [2024-11-27T06:32:53.085Z] =================================================================================================================== 00:37:41.880 [2024-11-27T06:32:53.085Z] Total : 8385.06 65.51 0.00 0.00 15220.24 2143.57 27088.21 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2655818 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:41.880 { 00:37:41.880 "params": { 00:37:41.880 "name": "Nvme$subsystem", 00:37:41.880 "trtype": "$TEST_TRANSPORT", 00:37:41.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:41.880 "adrfam": "ipv4", 00:37:41.880 "trsvcid": "$NVMF_PORT", 00:37:41.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:41.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:41.880 "hdgst": ${hdgst:-false}, 00:37:41.880 "ddgst": ${ddgst:-false} 00:37:41.880 }, 00:37:41.880 "method": "bdev_nvme_attach_controller" 00:37:41.880 } 00:37:41.880 EOF 00:37:41.880 )") 00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:37:41.880 
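[Editor's note] The ERROR pairs that follow ("Requested NSID 1 already in use" / "Unable to add namespace") repeat at short intervals while the second bdevperf run is active. They are consistent with the test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that is still attached, exercising the RPC error path under I/O rather than indicating a failure. A hypothetical loop that would produce this log shape (illustrative only; not claimed to be zcopy.sh's actual code):

  # Re-adding NSID 1 while it is still attached fails on every iteration with
  # "Requested NSID 1 already in use", as seen in the log below.
  while kill -0 "$perfpid" 2>/dev/null; do
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done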
[2024-11-27 07:32:52.763991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:37:41.880 [2024-11-27 07:32:52.764019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:37:41.880 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:37:41.881 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:37:41.881 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
[the error pair above -- spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused "Unable to add namespace" -- repeats every 12-15 ms from 07:32:52.775 until the capture ends at 07:32:56; only the unique log events are kept below]
00:37:41.881 [2024-11-27 07:32:52.804049] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization...
00:37:41.881 [2024-11-27 07:32:52.804096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655818 ]
00:37:41.881 [2024-11-27 07:32:52.888107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:41.881 [2024-11-27 07:32:52.917805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:42.142 Running I/O for 5 seconds...
00:37:43.188 19189.00 IOPS, 149.91 MiB/s [2024-11-27T06:32:54.393Z]
00:37:44.232 19212.00 IOPS, 150.09 MiB/s [2024-11-27T06:32:55.437Z]
00:37:45.016 19213.00 IOPS, 150.10 MiB/s [2024-11-27T06:32:56.221Z]
[the error pair is still repeating when the capture is truncated mid-entry:] [2024-11-27
07:32:56.263221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.263235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.276231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.276246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.291148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.291168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.303785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.303801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.316267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.316282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.331277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.331292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.344074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.344089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.356845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.356860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.371186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.371202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.384115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.384130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.396888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.396902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.410972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.410987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.423906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.423921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.436839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.436853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.451549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.451564] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.464765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.464780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.277 [2024-11-27 07:32:56.478739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.277 [2024-11-27 07:32:56.478753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.537 [2024-11-27 07:32:56.491815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.491833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.504373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.504387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.519292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.519308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.532173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.532188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.545577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.545592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.559624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.559639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.572391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.572405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.587437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.587453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.600317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.600331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.614451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.614466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.627737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.627752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.640468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.640482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.654957] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.654971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.668172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.668187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.680874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.680888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.695044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.695058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.708290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.708305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.723466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.723480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.538 [2024-11-27 07:32:56.736605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.538 [2024-11-27 07:32:56.736618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.797 [2024-11-27 07:32:56.750959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.797 [2024-11-27 07:32:56.750979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.797 [2024-11-27 07:32:56.763847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.797 [2024-11-27 07:32:56.763862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.797 [2024-11-27 07:32:56.776351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.797 [2024-11-27 07:32:56.776365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.797 [2024-11-27 07:32:56.791340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.797 [2024-11-27 07:32:56.791355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.804484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.804497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.818710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.818725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.831882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.831896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.844832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.844847] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.859065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.859079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.872000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.872014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.884851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.884865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.898845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.898859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.911695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.911709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.924467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.924481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.939114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.939129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.952285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.952299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.967421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.967436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.980503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.980516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:45.798 [2024-11-27 07:32:56.994750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:45.798 [2024-11-27 07:32:56.994764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.007476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.007491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.020804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.020818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.034623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.034637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.047473] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.047488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.060253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.060267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.075555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.075569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.088453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.088467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.103417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.103431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.116110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.116124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.129311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.129326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.143465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.143479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.156648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.156662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.171134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.058 [2024-11-27 07:32:57.171149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.058 [2024-11-27 07:32:57.184255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.059 [2024-11-27 07:32:57.184269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.059 19214.50 IOPS, 150.11 MiB/s [2024-11-27T06:32:57.264Z] [2024-11-27 07:32:57.198876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.059 [2024-11-27 07:32:57.198891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.059 [2024-11-27 07:32:57.211717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.059 [2024-11-27 07:32:57.211731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.059 [2024-11-27 07:32:57.224562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.059 [2024-11-27 07:32:57.224576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.059 [2024-11-27 07:32:57.239413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:37:46.059 [2024-11-27 07:32:57.239427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.059 [2024-11-27 07:32:57.252091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.059 [2024-11-27 07:32:57.252106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.265215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.265230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.279161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.279175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.292081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.292096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.304804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.304819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.319192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.319207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.332267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.332280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.346942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.346957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.359947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.359961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.372729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.372742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.387269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.387284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.400669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.400683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.415107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.415121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.427755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.427770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.440642] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.440656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.455350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.455364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.468593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.468607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.483154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.483173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.496153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.496171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.319 [2024-11-27 07:32:57.508783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.319 [2024-11-27 07:32:57.508801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.523338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.523354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.536279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.536293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.551460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.551475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.564741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.564755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.579366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.579380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.592280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.592294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.606829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.606843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.619626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.619640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.632479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.632492] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.647001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.647016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.659728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.659744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.672227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.672241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.687141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.687157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.700125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.700141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.712917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.712932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.727318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.727333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.740299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.740313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.755111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.755126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.768147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.768172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.582 [2024-11-27 07:32:57.780812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.582 [2024-11-27 07:32:57.780827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.843 [2024-11-27 07:32:57.795245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.843 [2024-11-27 07:32:57.795261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.843 [2024-11-27 07:32:57.808017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.843 [2024-11-27 07:32:57.808032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.843 [2024-11-27 07:32:57.820673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.843 [2024-11-27 07:32:57.820687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.843 [2024-11-27 07:32:57.834982] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.834997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.848106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.848122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.861144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.861164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.875329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.875344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.887936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.887951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.901140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.901154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.915377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.915392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.928339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.928353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.943352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.943367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.956335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.956349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.970513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.970528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.983718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.983733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:57.996223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:57.996236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:58.010921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:58.010936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:58.023831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:58.023849] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.844 [2024-11-27 07:32:58.036668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:46.844 [2024-11-27 07:32:58.036683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.050641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.050655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.063572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.063586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.076915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.076929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.091309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.091324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.104515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.104528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.119015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.119030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.132272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.132286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.147180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.147195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.160194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.160208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.172795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.172809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 [2024-11-27 07:32:58.187243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:47.104 [2024-11-27 07:32:58.187258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:47.104 19210.40 IOPS, 150.08 MiB/s 00:37:47.104 Latency(us) 00:37:47.104 [2024-11-27T06:32:58.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.104 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:37:47.104 Nvme1n1 : 5.01 19215.16 150.12 0.00 0.00 6655.90 2471.25 11741.87 00:37:47.104 [2024-11-27T06:32:58.309Z] =================================================================================================================== 00:37:47.104 [2024-11-27T06:32:58.309Z] Total : 
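[editor's note: the error storm collapsed above is the zcopy test re-issuing the add-namespace RPC while NSID 1 stays attached to nqn.2016-06.io.spdk:cnode1, so every attempt fails exactly as subsystem.c reports. A minimal sketch of one such failing attempt, using the rpc.py client from this workspace and the flags visible later in this trace; the bdev name malloc0 is taken from the delay-bdev setup below, and the sketch is illustrative, not the actual zcopy.sh loop:]

    # assumes a running nvmf_tgt with NSID 1 already attached to cnode1
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # each attempt makes the target log "Requested NSID 1 already in use"
    # and the RPC layer log "Unable to add namespace", matching the pair above
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1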
[editor's note: the same NSID-in-use / unable-to-add pair recurs nine more times between 07:32:58.195 and 07:32:58.292, about 12 ms apart, as the remaining RPC retries drain during shutdown; elided. The shell trace resumes below, reflowed one record per line.]
00:37:47.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2655818) - No such process
00:37:47.104 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2655818
00:37:47.104 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:47.104 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:47.104 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:47.364 delay0
00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.364 07:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:37:47.364 [2024-11-27 07:32:58.498323] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:53.945 Initializing NVMe Controllers 00:37:53.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:53.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:53.945 Initialization complete. Launching workers. 00:37:53.945 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3081 00:37:53.945 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3365, failed to submit 36 00:37:53.945 success 3199, unsuccessful 166, failed 0 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:53.945 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:53.945 rmmod nvme_tcp 00:37:53.945 rmmod nvme_fabrics 00:37:53.945 rmmod nvme_keyring 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2653611 ']' 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2653611 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2653611 ']' 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2653611 00:37:54.206 07:33:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653611 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653611' 00:37:54.206 killing process with pid 2653611 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2653611 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2653611 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.206 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.754 00:37:56.754 real 0m34.161s 00:37:56.754 user 0m43.523s 00:37:56.754 sys 0m12.453s 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:56.754 ************************************ 00:37:56.754 END TEST nvmf_zcopy 00:37:56.754 ************************************ 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
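[editor's note: the teardown traced above does three things: kills the target process (pid 2653611, running as reactor_1), unloads the kernel NVMe-oF modules (modprobe -v -r nvme-tcp / nvme-fabrics, with the rmmod output interleaved), and restores iptables without the SPDK-tagged rules. That last step, reassembled from the three pipeline fragments in this trace (iptables-save, grep -v SPDK_NVMF, iptables-restore), amounts to:]

    # keep every iptables rule except those mentioning SPDK_NVMF;
    # this is the visible effect of the iptr helper traced above
    iptables-save | grep -v SPDK_NVMF | iptables-restore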
00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:56.754 ************************************ 00:37:56.754 START TEST nvmf_nmic 00:37:56.754 ************************************ 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:56.754 * Looking for test storage... 00:37:56.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.754 --rc genhtml_branch_coverage=1 00:37:56.754 --rc genhtml_function_coverage=1 00:37:56.754 --rc genhtml_legend=1 00:37:56.754 --rc geninfo_all_blocks=1 00:37:56.754 --rc geninfo_unexecuted_blocks=1 00:37:56.754 00:37:56.754 ' 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.754 --rc genhtml_branch_coverage=1 00:37:56.754 --rc genhtml_function_coverage=1 00:37:56.754 --rc genhtml_legend=1 00:37:56.754 --rc geninfo_all_blocks=1 00:37:56.754 --rc geninfo_unexecuted_blocks=1 00:37:56.754 00:37:56.754 ' 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.754 --rc genhtml_branch_coverage=1 00:37:56.754 --rc genhtml_function_coverage=1 00:37:56.754 --rc genhtml_legend=1 00:37:56.754 --rc geninfo_all_blocks=1 00:37:56.754 --rc geninfo_unexecuted_blocks=1 00:37:56.754 00:37:56.754 ' 00:37:56.754 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.754 --rc genhtml_branch_coverage=1 00:37:56.754 --rc genhtml_function_coverage=1 00:37:56.754 --rc genhtml_legend=1 00:37:56.754 --rc geninfo_all_blocks=1 00:37:56.754 --rc geninfo_unexecuted_blocks=1 00:37:56.754 00:37:56.754 ' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.755 07:33:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.755 07:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:04.935 07:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:04.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:04.935 07:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:04.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:04.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.935 
07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:04.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:04.935 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
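Note: the nvmf_tcp_init trace above and immediately below amounts to the following standalone sequence. This is a sketch of the effect, not the verbatim common.sh code; the interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.x addresses are taken from this log, while the commented veth line is an assumption for reproducing the topology on a host without the E810 port pair.
    #!/usr/bin/env bash
    # Two-port NVMe/TCP test topology: the target side lives in its own
    # network namespace, the initiator side stays in the root namespace.
    set -euo pipefail
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0      # moved into $NS, carries the target listener (10.0.0.2)
    INI_IF=cvl_0_1      # stays in the root namespace, used by the host (10.0.0.1)
    # ip link add "$TGT_IF" type veth peer name "$INI_IF"  # assumed stand-in when no real NIC pair exists
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # punch the NVMe/TCP port through the host firewall (common.sh's ipts helper)
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions, as the trace below does
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1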
00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:04.935 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:04.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:04.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms
00:38:04.936 
00:38:04.936 --- 10.0.0.2 ping statistics ---
00:38:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:04.936 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:04.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:04.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:38:04.936 
00:38:04.936 --- 10.0.0.1 ping statistics ---
00:38:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:04.936 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2662870
00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
nvmf/common.sh@510 -- # waitforlisten 2662870 00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2662870 ']' 00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:04.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:04.936 07:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:04.936 [2024-11-27 07:33:15.371273] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:04.936 [2024-11-27 07:33:15.372415] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:38:04.936 [2024-11-27 07:33:15.372467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:04.936 [2024-11-27 07:33:15.471755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:04.936 [2024-11-27 07:33:15.526130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:04.936 [2024-11-27 07:33:15.526196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:04.936 [2024-11-27 07:33:15.526205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:04.936 [2024-11-27 07:33:15.526212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:04.936 [2024-11-27 07:33:15.526218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:04.936 [2024-11-27 07:33:15.528222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.936 [2024-11-27 07:33:15.528320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:04.936 [2024-11-27 07:33:15.528488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:04.936 [2024-11-27 07:33:15.528503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.936 [2024-11-27 07:33:15.607299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:04.936 [2024-11-27 07:33:15.608292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:04.936 [2024-11-27 07:33:15.608602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
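The nvmf_tgt launch and waitforlisten handshake traced here amount to two steps: start the interrupt-mode target inside the target namespace, then poll its RPC socket until it answers. A minimal equivalent (the binary path and flags are copied from the waitforlisten trace above; the polling loop is a sketch of what waitforlisten does, not its exact code):
    # start the interrupt-mode target inside the target namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # wait until the app answers on the default RPC socket /var/tmp/spdk.sock
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid"   # bail out if the target died during startup
        sleep 0.5
    done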
00:38:04.936 [2024-11-27 07:33:15.609188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:04.936 [2024-11-27 07:33:15.609230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:05.198 [2024-11-27 07:33:16.233607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:05.198 Malloc0 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
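The rpc_cmd calls traced above provision the target over that same RPC socket; rpc_cmd forwards to scripts/rpc.py (possibly through a pipe), so the steps correspond to the direct invocations below. Arguments are copied verbatim from the trace; the unix-domain RPC socket is reachable from the root namespace because network namespaces do not isolate the filesystem, which is why no netns prefix is needed here.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags as passed by the test
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420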
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:38:05.198 [2024-11-27 07:33:16.325918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:38:05.198 test case1: single bdev can't be used in multiple subsystems
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:38:05.198 [2024-11-27 07:33:16.361221] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
[2024-11-27 07:33:16.361247] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
[2024-11-27 07:33:16.361256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:05.198 request:
00:38:05.198 {
00:38:05.198 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:38:05.198 "namespace": {
00:38:05.198 "bdev_name": "Malloc0",
00:38:05.198 "no_auto_visible": false,
00:38:05.198 "hide_metadata": false
00:38:05.198 },
00:38:05.198 "method": "nvmf_subsystem_add_ns",
00:38:05.198 "req_id": 1
00:38:05.198 }
00:38:05.198 Got JSON-RPC error response
response:
00:38:05.198 {
00:38:05.198 "code": -32602,
00:38:05.198 "message": "Invalid parameters"
00:38:05.198 }
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:38:05.198 Adding namespace failed - expected result.
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:38:05.198 test case2: host connect to nvmf target in multiple paths
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:38:05.198 [2024-11-27 07:33:16.373343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:05.198 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:38:05.772 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:38:06.343 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:38:06.343 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:38:06.343 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:38:06.343 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:38:06.343 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:38:08.256 07:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:38:08.256 07:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:38:08.256 07:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:38:08.256 07:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:38:08.256 07:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:38:08.256 07:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:38:08.256 07:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:38:08.256 [global]
00:38:08.256 thread=1
00:38:08.256 invalidate=1
00:38:08.256 rw=write
00:38:08.256 time_based=1
00:38:08.256 runtime=1
00:38:08.256 ioengine=libaio
00:38:08.256 direct=1
00:38:08.256 bs=4096
00:38:08.256 iodepth=1
00:38:08.256 norandommap=0
00:38:08.256 numjobs=1
00:38:08.256 
00:38:08.256 verify_dump=1
00:38:08.256 verify_backlog=512
00:38:08.256 verify_state_save=0
00:38:08.256 do_verify=1
00:38:08.256 verify=crc32c-intel
00:38:08.256 [job0]
00:38:08.256 filename=/dev/nvme0n1
00:38:08.256 Could not set queue depth (nvme0n1)
00:38:08.517 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:38:08.517 fio-3.35
00:38:08.517 Starting 1 thread
00:38:09.902 
00:38:09.902 job0: (groupid=0, jobs=1): err= 0: pid=2663804: Wed Nov 27 07:33:20 2024
00:38:09.902 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:38:09.902 slat (nsec): min=7357, max=57991, avg=27802.06, stdev=3069.85
00:38:09.902 clat (usec): min=663, max=1207, avg=987.69, stdev=64.74
00:38:09.902 lat (usec): min=691, max=1235, avg=1015.50, stdev=64.62
00:38:09.902 clat percentiles (usec):
00:38:09.902 | 1.00th=[ 824], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 938],
00:38:09.902 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004],
00:38:09.902 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090],
00:38:09.902 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1205],
00:38:09.902 | 99.99th=[ 1205]
00:38:09.902 write: IOPS=735, BW=2941KiB/s (3012kB/s)(2944KiB/1001msec); 0 zone resets
00:38:09.902 slat (usec): min=9, max=31629, avg=74.38, stdev=1164.75
00:38:09.902 clat (usec): min=149, max=1170, avg=564.83, stdev=101.81
00:38:09.902 lat (usec): min=169, max=32174, avg=639.22, stdev=1168.77
00:38:09.902 clat percentiles (usec):
00:38:09.902 | 1.00th=[ 293], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 490],
00:38:09.902 | 30.00th=[ 519], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594],
00:38:09.902 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 685], 95.00th=[ 717],
00:38:09.902 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 1172], 99.95th=[ 1172],
00:38:09.902 | 99.99th=[ 1172]
00:38:09.902 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:38:09.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:38:09.902 lat (usec) : 250=0.24%, 500=13.46%, 750=44.23%, 1000=23.80%
00:38:09.902 lat (msec) : 2=18.27%
00:38:09.902 cpu : usr=3.70%, sys=3.90%, ctx=1251, majf=0, minf=1
00:38:09.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:09.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:09.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:09.903 issued rwts: total=512,736,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:09.903 latency : target=0, window=0, percentile=100.00%, depth=1
00:38:09.903 
00:38:09.903 Run status group 0 (all jobs):
00:38:09.903 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec
00:38:09.903 WRITE: bw=2941KiB/s (3012kB/s), 2941KiB/s-2941KiB/s (3012kB/s-3012kB/s), io=2944KiB (3015kB), run=1001-1001msec
00:38:09.903 
00:38:09.903 Disk stats (read/write):
00:38:09.903 nvme0n1: ios=537/571, merge=0/0, ticks=1459/253, in_queue=1712, util=98.80%
00:38:09.903 07:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:38:09.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:38:09.903 07:33:20
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:09.903 07:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:38:09.903 07:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:09.903 07:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:09.903 rmmod nvme_tcp 00:38:09.903 rmmod nvme_fabrics 00:38:09.903 rmmod nvme_keyring 00:38:09.903 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2662870 ']' 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2662870 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2662870 ']' 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2662870 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662870 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2662870' 00:38:10.164 killing process with pid 2662870 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2662870 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2662870 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:10.164 07:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:12.708 00:38:12.708 real 0m15.875s 00:38:12.708 user 0m37.074s 00:38:12.708 sys 0m7.552s 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:12.708 ************************************ 00:38:12.708 END TEST nvmf_nmic 00:38:12.708 ************************************ 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:12.708 ************************************ 00:38:12.708 START TEST nvmf_fio_target 00:38:12.708 ************************************ 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:12.708 * Looking for test storage... 
00:38:12.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.708 --rc genhtml_branch_coverage=1 00:38:12.708 --rc genhtml_function_coverage=1 00:38:12.708 --rc genhtml_legend=1 00:38:12.708 --rc geninfo_all_blocks=1 00:38:12.708 --rc geninfo_unexecuted_blocks=1 00:38:12.708 00:38:12.708 ' 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.708 --rc genhtml_branch_coverage=1 00:38:12.708 --rc genhtml_function_coverage=1 00:38:12.708 --rc genhtml_legend=1 00:38:12.708 --rc geninfo_all_blocks=1 00:38:12.708 --rc geninfo_unexecuted_blocks=1 00:38:12.708 00:38:12.708 ' 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.708 --rc genhtml_branch_coverage=1 00:38:12.708 --rc genhtml_function_coverage=1 00:38:12.708 --rc genhtml_legend=1 00:38:12.708 --rc geninfo_all_blocks=1 00:38:12.708 --rc geninfo_unexecuted_blocks=1 00:38:12.708 00:38:12.708 ' 00:38:12.708 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:12.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.709 --rc genhtml_branch_coverage=1 00:38:12.709 --rc genhtml_function_coverage=1 00:38:12.709 --rc genhtml_legend=1 00:38:12.709 --rc geninfo_all_blocks=1 00:38:12.709 --rc geninfo_unexecuted_blocks=1 00:38:12.709 
00:38:12.709 ' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:12.709 07:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:20.854 07:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:20.854 07:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:20.854 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:20.854 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:20.854 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.854 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:20.855 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:20.855 07:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:20.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:20.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:38:20.855 00:38:20.855 --- 10.0.0.2 ping statistics --- 00:38:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.855 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:20.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:20.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:38:20.855 00:38:20.855 --- 10.0.0.1 ping statistics --- 00:38:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.855 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2668359 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2668359 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2668359 ']' 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
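[Editor's note] The network plumbing traced above reduces to the short sequence below. This is a condensed sketch assembled from the commands visible in this run (interface names cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, the 10.0.0.x addresses, and port 4420 are all taken from the log); the actual logic lives in nvmf_tcp_init in test/nvmf/common.sh and handles more configurations than shown here.

    #!/usr/bin/env bash
    # Sketch of the dual-port E810 test topology set up in the trace above.
    TGT_IF=cvl_0_0          # target-side port, moved into its own namespace
    INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                          # isolate the target port
    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP listener port toward the initiator interface
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator

Running both NVMe/TCP endpoints over a veth-free, namespace-split physical NIC pair like this lets a single host exercise real e810 hardware on both sides of the connection, which is why the pings above go through the kernel stack in under a millisecond.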
00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.855 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:20.855 [2024-11-27 07:33:31.281918] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:20.855 [2024-11-27 07:33:31.283046] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:38:20.855 [2024-11-27 07:33:31.283096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.855 [2024-11-27 07:33:31.383113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:20.855 [2024-11-27 07:33:31.436186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:20.855 [2024-11-27 07:33:31.436246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:20.855 [2024-11-27 07:33:31.436255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:20.855 [2024-11-27 07:33:31.436263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:20.855 [2024-11-27 07:33:31.436270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:20.855 [2024-11-27 07:33:31.438288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.855 [2024-11-27 07:33:31.438565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:20.855 [2024-11-27 07:33:31.438727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:20.855 [2024-11-27 07:33:31.438728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.855 [2024-11-27 07:33:31.517219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:20.855 [2024-11-27 07:33:31.517904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:20.855 [2024-11-27 07:33:31.518473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:20.855 [2024-11-27 07:33:31.518920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:20.855 [2024-11-27 07:33:31.518981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
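[Editor's note] Before the fio workloads start, target/fio.sh provisions the target over RPC; the trace that follows condenses to roughly the sketch below. It is assembled from the commands visible in this run, not a verbatim extract of the script: paths are shortened, and the loops are an illustration where the script issues the calls one by one.

    # Sketch of the target bring-up and provisioning steps traced below.
    rpc=scripts/rpc.py
    ns="ip netns exec cvl_0_0_ns_spdk"

    $ns build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &   # 4 reactors, interrupt mode
    $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport

    # Seven 64 MiB malloc bdevs with 512 B blocks: Malloc0..Malloc6
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
    $rpc bdev_raid_create -n raid0   -r 0      -z 64 -b 'Malloc2 Malloc3'           # striped
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'   # concatenated

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"   # four namespaces
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect and wait for /dev/nvme0n1..n4 to appear
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
         --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

The four namespaces explain the "waitforserial SPDKISFASTANDAWESOME 4" check later in the trace: lsblk must report exactly four block devices with that serial before the fio jobs (one per /dev/nvme0n1..n4) are started.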
00:38:21.117 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:21.117 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:38:21.117 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:21.117 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:21.117 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:21.117 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:21.117 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:21.379 [2024-11-27 07:33:32.327776] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:21.379 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:21.639 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:38:21.639 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:21.639 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:38:21.640 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:21.901 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:38:21.901 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:22.162 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:38:22.162 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:38:22.423 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:22.423 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:22.423 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:22.684 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:22.684 07:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:22.945 07:33:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:22.945 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:23.206 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:23.206 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:23.206 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:23.467 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:23.467 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:23.728 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:23.728 [2024-11-27 07:33:34.919752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.989 07:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:23.989 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:24.250 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:24.821 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:24.821 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:38:24.821 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:24.821 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:38:24.821 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:38:24.821 07:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:38:26.734 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:38:26.734 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:38:26.734 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:38:26.734 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:38:26.734 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:38:26.734 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:38:26.734 07:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:26.734 [global] 00:38:26.734 thread=1 00:38:26.734 invalidate=1 00:38:26.734 rw=write 00:38:26.734 time_based=1 00:38:26.734 runtime=1 00:38:26.734 ioengine=libaio 00:38:26.734 direct=1 00:38:26.734 bs=4096 00:38:26.734 iodepth=1 00:38:26.734 norandommap=0 00:38:26.734 numjobs=1 00:38:26.734 00:38:26.734 verify_dump=1 00:38:26.734 verify_backlog=512 00:38:26.734 verify_state_save=0 00:38:26.734 do_verify=1 00:38:26.734 verify=crc32c-intel 00:38:26.734 [job0] 00:38:26.734 filename=/dev/nvme0n1 00:38:26.734 [job1] 00:38:26.734 filename=/dev/nvme0n2 00:38:26.734 [job2] 00:38:26.734 filename=/dev/nvme0n3 00:38:26.734 [job3] 00:38:26.734 filename=/dev/nvme0n4 00:38:26.734 Could not set queue depth (nvme0n1) 00:38:26.734 Could not set queue depth (nvme0n2) 00:38:26.734 Could not set queue depth (nvme0n3) 00:38:26.734 Could not set queue depth (nvme0n4) 00:38:26.993 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.993 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.993 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.993 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.993 fio-3.35 00:38:26.993 Starting 4 threads 00:38:28.378 00:38:28.378 job0: (groupid=0, jobs=1): err= 0: pid=2669736: Wed Nov 27 07:33:39 2024 00:38:28.378 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:28.379 slat (nsec): min=25346, max=44835, avg=26715.44, stdev=3010.06 00:38:28.379 clat (usec): min=627, max=1747, avg=1027.29, stdev=135.66 00:38:28.379 lat (usec): min=653, max=1773, avg=1054.01, stdev=135.70 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 701], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 922], 00:38:28.379 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1057], 00:38:28.379 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1237], 00:38:28.379 | 99.00th=[ 1352], 99.50th=[ 1401], 99.90th=[ 1745], 99.95th=[ 1745], 00:38:28.379 | 99.99th=[ 1745] 00:38:28.379 write: IOPS=675, BW=2701KiB/s (2766kB/s)(2704KiB/1001msec); 0 zone resets 00:38:28.379 slat (nsec): min=9756, max=57560, avg=29027.30, stdev=10416.34 00:38:28.379 clat (usec): min=200, max=1108, avg=637.47, stdev=146.76 00:38:28.379 lat (usec): min=210, max=1143, avg=666.50, stdev=149.84 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 297], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 519], 00:38:28.379 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:38:28.379 | 70.00th=[ 709], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 881], 00:38:28.379 | 
99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1106], 99.95th=[ 1106], 00:38:28.379 | 99.99th=[ 1106] 00:38:28.379 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:38:28.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:28.379 lat (usec) : 250=0.08%, 500=9.76%, 750=35.86%, 1000=28.62% 00:38:28.379 lat (msec) : 2=25.67% 00:38:28.379 cpu : usr=1.80%, sys=3.40%, ctx=1191, majf=0, minf=1 00:38:28.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:28.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 issued rwts: total=512,676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:28.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:28.379 job1: (groupid=0, jobs=1): err= 0: pid=2669752: Wed Nov 27 07:33:39 2024 00:38:28.379 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:28.379 slat (nsec): min=25548, max=59294, avg=26772.70, stdev=3174.43 00:38:28.379 clat (usec): min=785, max=41411, avg=1154.06, stdev=1785.43 00:38:28.379 lat (usec): min=812, max=41437, avg=1180.83, stdev=1785.41 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 807], 5.00th=[ 881], 10.00th=[ 930], 20.00th=[ 996], 00:38:28.379 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:38:28.379 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:38:28.379 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[41157], 99.95th=[41157], 00:38:28.379 | 99.99th=[41157] 00:38:28.379 write: IOPS=552, BW=2210KiB/s (2263kB/s)(2212KiB/1001msec); 0 zone resets 00:38:28.379 slat (usec): min=10, max=19822, avg=66.28, stdev=841.70 00:38:28.379 clat (usec): min=237, max=1018, avg=631.81, stdev=130.35 00:38:28.379 lat (usec): min=272, max=20599, avg=698.09, stdev=858.51 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 355], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 515], 00:38:28.379 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 676], 00:38:28.379 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 824], 00:38:28.379 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1020], 99.95th=[ 1020], 00:38:28.379 | 99.99th=[ 1020] 00:38:28.379 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:38:28.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:28.379 lat (usec) : 250=0.09%, 500=9.30%, 750=33.05%, 1000=19.81% 00:38:28.379 lat (msec) : 2=37.65%, 50=0.09% 00:38:28.379 cpu : usr=1.40%, sys=3.30%, ctx=1067, majf=0, minf=1 00:38:28.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:28.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 issued rwts: total=512,553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:28.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:28.379 job2: (groupid=0, jobs=1): err= 0: pid=2669770: Wed Nov 27 07:33:39 2024 00:38:28.379 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:28.379 slat (nsec): min=6853, max=62693, avg=28457.93, stdev=3401.41 00:38:28.379 clat (usec): min=281, max=1514, avg=941.17, stdev=142.82 00:38:28.379 lat (usec): min=310, max=1543, avg=969.63, stdev=142.78 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 537], 5.00th=[ 676], 10.00th=[ 766], 20.00th=[ 840], 
00:38:28.379 | 30.00th=[ 881], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 996], 00:38:28.379 | 70.00th=[ 1012], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1156], 00:38:28.379 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1516], 99.95th=[ 1516], 00:38:28.379 | 99.99th=[ 1516] 00:38:28.379 write: IOPS=801, BW=3205KiB/s (3282kB/s)(3208KiB/1001msec); 0 zone resets 00:38:28.379 slat (nsec): min=9477, max=73185, avg=34811.96, stdev=8265.43 00:38:28.379 clat (usec): min=212, max=960, avg=578.92, stdev=130.08 00:38:28.379 lat (usec): min=227, max=996, avg=613.73, stdev=131.86 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 260], 5.00th=[ 359], 10.00th=[ 404], 20.00th=[ 465], 00:38:28.379 | 30.00th=[ 506], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:38:28.379 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 775], 00:38:28.379 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 963], 99.95th=[ 963], 00:38:28.379 | 99.99th=[ 963] 00:38:28.379 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:38:28.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:28.379 lat (usec) : 250=0.30%, 500=17.43%, 750=40.64%, 1000=27.32% 00:38:28.379 lat (msec) : 2=14.31% 00:38:28.379 cpu : usr=2.30%, sys=6.10%, ctx=1315, majf=0, minf=1 00:38:28.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:28.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 issued rwts: total=512,802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:28.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:28.379 job3: (groupid=0, jobs=1): err= 0: pid=2669776: Wed Nov 27 07:33:39 2024 00:38:28.379 read: IOPS=410, BW=1644KiB/s (1683kB/s)(1660KiB/1010msec) 00:38:28.379 slat (nsec): min=7033, max=60677, avg=28085.75, stdev=3894.55 00:38:28.379 clat (usec): min=595, max=41995, avg=1609.20, stdev=4869.50 00:38:28.379 lat (usec): min=603, max=42021, avg=1637.29, stdev=4869.29 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947], 00:38:28.379 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1057], 00:38:28.379 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1221], 00:38:28.379 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:38:28.379 | 99.99th=[42206] 00:38:28.379 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:38:28.379 slat (nsec): min=9516, max=54413, avg=31266.31, stdev=10734.92 00:38:28.379 clat (usec): min=247, max=985, avg=598.24, stdev=118.52 00:38:28.379 lat (usec): min=259, max=1022, avg=629.51, stdev=122.94 00:38:28.379 clat percentiles (usec): 00:38:28.379 | 1.00th=[ 314], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 490], 00:38:28.379 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:38:28.379 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 775], 00:38:28.379 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 988], 99.95th=[ 988], 00:38:28.379 | 99.99th=[ 988] 00:38:28.379 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:38:28.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:28.379 lat (usec) : 250=0.11%, 500=12.30%, 750=38.62%, 1000=21.25% 00:38:28.379 lat (msec) : 2=27.08%, 50=0.65% 00:38:28.379 cpu : usr=1.98%, sys=3.57%, ctx=928, majf=0, minf=1 00:38:28.379 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:28.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.379 issued rwts: total=415,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:28.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:28.379 00:38:28.379 Run status group 0 (all jobs): 00:38:28.379 READ: bw=7727KiB/s (7912kB/s), 1644KiB/s-2046KiB/s (1683kB/s-2095kB/s), io=7804KiB (7991kB), run=1001-1010msec 00:38:28.379 WRITE: bw=9.83MiB/s (10.3MB/s), 2028KiB/s-3205KiB/s (2076kB/s-3282kB/s), io=9.93MiB (10.4MB), run=1001-1010msec 00:38:28.379 00:38:28.379 Disk stats (read/write): 00:38:28.379 nvme0n1: ios=512/512, merge=0/0, ticks=1008/314, in_queue=1322, util=96.49% 00:38:28.379 nvme0n2: ios=424/512, merge=0/0, ticks=1429/313, in_queue=1742, util=96.93% 00:38:28.379 nvme0n3: ios=534/528, merge=0/0, ticks=1373/232, in_queue=1605, util=96.61% 00:38:28.379 nvme0n4: ios=432/512, merge=0/0, ticks=1345/245, in_queue=1590, util=96.57% 00:38:28.379 07:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:38:28.379 [global] 00:38:28.379 thread=1 00:38:28.379 invalidate=1 00:38:28.379 rw=randwrite 00:38:28.379 time_based=1 00:38:28.379 runtime=1 00:38:28.379 ioengine=libaio 00:38:28.379 direct=1 00:38:28.379 bs=4096 00:38:28.379 iodepth=1 00:38:28.379 norandommap=0 00:38:28.379 numjobs=1 00:38:28.379 00:38:28.379 verify_dump=1 00:38:28.379 verify_backlog=512 00:38:28.379 verify_state_save=0 00:38:28.379 do_verify=1 00:38:28.379 verify=crc32c-intel 00:38:28.379 [job0] 00:38:28.379 filename=/dev/nvme0n1 00:38:28.379 [job1] 00:38:28.379 filename=/dev/nvme0n2 00:38:28.379 [job2] 00:38:28.379 filename=/dev/nvme0n3 00:38:28.379 [job3] 00:38:28.379 filename=/dev/nvme0n4 00:38:28.379 Could not set queue depth (nvme0n1) 00:38:28.379 Could not set queue depth (nvme0n2) 00:38:28.379 Could not set queue depth (nvme0n3) 00:38:28.379 Could not set queue depth (nvme0n4) 00:38:28.947 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:28.947 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:28.947 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:28.947 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:28.947 fio-3.35 00:38:28.947 Starting 4 threads 00:38:30.328 00:38:30.328 job0: (groupid=0, jobs=1): err= 0: pid=2670202: Wed Nov 27 07:33:41 2024 00:38:30.328 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1014msec) 00:38:30.328 slat (nsec): min=25788, max=26653, avg=26121.71, stdev=234.74 00:38:30.328 clat (usec): min=40816, max=42093, avg=41562.77, stdev=480.39 00:38:30.328 lat (usec): min=40843, max=42119, avg=41588.89, stdev=480.47 00:38:30.328 clat percentiles (usec): 00:38:30.328 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:30.329 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:38:30.329 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:30.329 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:30.329 | 99.99th=[42206] 00:38:30.329 write: IOPS=504, BW=2020KiB/s 
(2068kB/s)(2048KiB/1014msec); 0 zone resets 00:38:30.329 slat (nsec): min=8841, max=52959, avg=29433.33, stdev=9007.94 00:38:30.329 clat (usec): min=123, max=3033, avg=562.63, stdev=191.20 00:38:30.329 lat (usec): min=132, max=3065, avg=592.06, stdev=194.32 00:38:30.329 clat percentiles (usec): 00:38:30.329 | 1.00th=[ 141], 5.00th=[ 269], 10.00th=[ 343], 20.00th=[ 424], 00:38:30.329 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 619], 00:38:30.329 | 70.00th=[ 644], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 799], 00:38:30.329 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 3032], 99.95th=[ 3032], 00:38:30.329 | 99.99th=[ 3032] 00:38:30.329 bw ( KiB/s): min= 4096, max= 4096, per=47.05%, avg=4096.00, stdev= 0.00, samples=1 00:38:30.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:30.329 lat (usec) : 250=3.78%, 500=25.71%, 750=58.60%, 1000=8.51% 00:38:30.329 lat (msec) : 4=0.19%, 50=3.21% 00:38:30.329 cpu : usr=1.09%, sys=1.88%, ctx=529, majf=0, minf=1 00:38:30.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:30.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:30.329 job1: (groupid=0, jobs=1): err= 0: pid=2670208: Wed Nov 27 07:33:41 2024 00:38:30.329 read: IOPS=16, BW=65.8KiB/s (67.3kB/s)(68.0KiB/1034msec) 00:38:30.329 slat (nsec): min=26135, max=26956, avg=26432.94, stdev=256.69 00:38:30.329 clat (usec): min=1102, max=42058, avg=39441.14, stdev=9884.80 00:38:30.329 lat (usec): min=1129, max=42085, avg=39467.57, stdev=9884.80 00:38:30.329 clat percentiles (usec): 00:38:30.329 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41681], 00:38:30.329 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:38:30.329 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:30.329 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:30.329 | 99.99th=[42206] 00:38:30.329 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:38:30.329 slat (nsec): min=9865, max=52343, avg=31338.72, stdev=8280.46 00:38:30.329 clat (usec): min=280, max=2271, avg=668.85, stdev=161.61 00:38:30.329 lat (usec): min=297, max=2306, avg=700.19, stdev=163.89 00:38:30.329 clat percentiles (usec): 00:38:30.329 | 1.00th=[ 310], 5.00th=[ 416], 10.00th=[ 478], 20.00th=[ 545], 00:38:30.329 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 676], 60.00th=[ 717], 00:38:30.329 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 848], 95.00th=[ 914], 00:38:30.329 | 99.00th=[ 988], 99.50th=[ 1074], 99.90th=[ 2278], 99.95th=[ 2278], 00:38:30.329 | 99.99th=[ 2278] 00:38:30.329 bw ( KiB/s): min= 4096, max= 4096, per=47.05%, avg=4096.00, stdev= 0.00, samples=1 00:38:30.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:30.329 lat (usec) : 500=12.29%, 750=56.71%, 1000=27.03% 00:38:30.329 lat (msec) : 2=0.76%, 4=0.19%, 50=3.02% 00:38:30.329 cpu : usr=0.77%, sys=1.55%, ctx=533, majf=0, minf=1 00:38:30.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:38:30.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:30.329 job2: (groupid=0, jobs=1): err= 0: pid=2670214: Wed Nov 27 07:33:41 2024 00:38:30.329 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:30.329 slat (nsec): min=25811, max=62723, avg=27305.41, stdev=3960.14 00:38:30.329 clat (usec): min=516, max=1718, avg=968.88, stdev=133.73 00:38:30.329 lat (usec): min=542, max=1745, avg=996.18, stdev=133.40 00:38:30.329 clat percentiles (usec): 00:38:30.329 | 1.00th=[ 578], 5.00th=[ 709], 10.00th=[ 783], 20.00th=[ 889], 00:38:30.329 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 996], 00:38:30.329 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1172], 00:38:30.329 | 99.00th=[ 1303], 99.50th=[ 1352], 99.90th=[ 1713], 99.95th=[ 1713], 00:38:30.329 | 99.99th=[ 1713] 00:38:30.329 write: IOPS=722, BW=2889KiB/s (2958kB/s)(2892KiB/1001msec); 0 zone resets 00:38:30.329 slat (nsec): min=10058, max=66984, avg=31675.28, stdev=9188.55 00:38:30.329 clat (usec): min=198, max=1952, avg=629.86, stdev=143.20 00:38:30.329 lat (usec): min=209, max=1971, avg=661.54, stdev=145.48 00:38:30.329 clat percentiles (usec): 00:38:30.329 | 1.00th=[ 289], 5.00th=[ 396], 10.00th=[ 469], 20.00th=[ 515], 00:38:30.329 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:38:30.329 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 799], 95.00th=[ 848], 00:38:30.329 | 99.00th=[ 955], 99.50th=[ 1004], 99.90th=[ 1958], 99.95th=[ 1958], 00:38:30.329 | 99.99th=[ 1958] 00:38:30.329 bw ( KiB/s): min= 4096, max= 4096, per=47.05%, avg=4096.00, stdev= 0.00, samples=1 00:38:30.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:30.329 lat (usec) : 250=0.16%, 500=8.91%, 750=43.00%, 1000=31.74% 00:38:30.329 lat (msec) : 2=16.19% 00:38:30.329 cpu : usr=1.80%, sys=3.90%, ctx=1237, majf=0, minf=1 00:38:30.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 issued rwts: total=512,723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:30.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:30.329 job3: (groupid=0, jobs=1): err= 0: pid=2670220: Wed Nov 27 07:33:41 2024 00:38:30.329 read: IOPS=17, BW=69.4KiB/s (71.0kB/s)(72.0KiB/1038msec) 00:38:30.329 slat (nsec): min=26866, max=28009, avg=27281.72, stdev=347.90 00:38:30.329 clat (usec): min=1165, max=42079, avg=39596.34, stdev=9595.15 00:38:30.329 lat (usec): min=1192, max=42106, avg=39623.62, stdev=9595.11 00:38:30.329 clat percentiles (usec): 00:38:30.329 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681], 00:38:30.329 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:38:30.329 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:30.329 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:30.329 | 99.99th=[42206] 00:38:30.329 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:38:30.329 slat (nsec): min=8845, max=52834, avg=28165.09, stdev=10255.22 00:38:30.329 clat (usec): min=125, max=896, avg=598.99, stdev=150.35 00:38:30.329 lat (usec): min=134, max=930, avg=627.15, stdev=155.53 00:38:30.329 clat percentiles (usec): 00:38:30.329 | 1.00th=[ 200], 5.00th=[ 322], 10.00th=[ 383], 20.00th=[ 469], 00:38:30.329 | 30.00th=[ 537], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 
00:38:30.329 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:38:30.329 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 898], 99.95th=[ 898], 00:38:30.329 | 99.99th=[ 898] 00:38:30.329 bw ( KiB/s): min= 4096, max= 4096, per=47.05%, avg=4096.00, stdev= 0.00, samples=1 00:38:30.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:30.329 lat (usec) : 250=2.08%, 500=22.08%, 750=57.74%, 1000=14.72% 00:38:30.329 lat (msec) : 2=0.19%, 50=3.21% 00:38:30.329 cpu : usr=0.39%, sys=2.41%, ctx=530, majf=0, minf=1 00:38:30.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.329 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:30.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:30.329 00:38:30.329 Run status group 0 (all jobs): 00:38:30.329 READ: bw=2173KiB/s (2226kB/s), 65.8KiB/s-2046KiB/s (67.3kB/s-2095kB/s), io=2256KiB (2310kB), run=1001-1038msec 00:38:30.329 WRITE: bw=8705KiB/s (8914kB/s), 1973KiB/s-2889KiB/s (2020kB/s-2958kB/s), io=9036KiB (9253kB), run=1001-1038msec 00:38:30.329 00:38:30.329 Disk stats (read/write): 00:38:30.329 nvme0n1: ios=62/512, merge=0/0, ticks=558/201, in_queue=759, util=87.78% 00:38:30.329 nvme0n2: ios=35/512, merge=0/0, ticks=1427/330, in_queue=1757, util=98.06% 00:38:30.329 nvme0n3: ios=508/512, merge=0/0, ticks=1362/310, in_queue=1672, util=97.78% 00:38:30.329 nvme0n4: ios=13/512, merge=0/0, ticks=503/241, in_queue=744, util=89.52% 00:38:30.329 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:38:30.329 [global] 00:38:30.329 thread=1 00:38:30.329 invalidate=1 00:38:30.329 rw=write 00:38:30.329 time_based=1 00:38:30.329 runtime=1 00:38:30.329 ioengine=libaio 00:38:30.329 direct=1 00:38:30.329 bs=4096 00:38:30.329 iodepth=128 00:38:30.329 norandommap=0 00:38:30.329 numjobs=1 00:38:30.329 00:38:30.329 verify_dump=1 00:38:30.329 verify_backlog=512 00:38:30.329 verify_state_save=0 00:38:30.329 do_verify=1 00:38:30.329 verify=crc32c-intel 00:38:30.329 [job0] 00:38:30.329 filename=/dev/nvme0n1 00:38:30.329 [job1] 00:38:30.329 filename=/dev/nvme0n2 00:38:30.329 [job2] 00:38:30.329 filename=/dev/nvme0n3 00:38:30.329 [job3] 00:38:30.329 filename=/dev/nvme0n4 00:38:30.329 Could not set queue depth (nvme0n1) 00:38:30.329 Could not set queue depth (nvme0n2) 00:38:30.329 Could not set queue depth (nvme0n3) 00:38:30.329 Could not set queue depth (nvme0n4) 00:38:30.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:30.588 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:30.588 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:30.588 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:30.588 fio-3.35 00:38:30.588 Starting 4 threads 00:38:31.974 00:38:31.975 job0: (groupid=0, jobs=1): err= 0: pid=2670711: Wed Nov 27 07:33:42 2024 00:38:31.975 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:38:31.975 slat (nsec): min=887, max=8233.5k, avg=67883.79, stdev=486937.20 00:38:31.975 clat (usec): min=1874, 
max=29773, avg=9139.83, stdev=3646.92 00:38:31.975 lat (usec): min=1890, max=29782, avg=9207.71, stdev=3686.85 00:38:31.975 clat percentiles (usec): 00:38:31.975 | 1.00th=[ 3228], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6849], 00:38:31.975 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8455], 00:38:31.975 | 70.00th=[ 9896], 80.00th=[12125], 90.00th=[15139], 95.00th=[16581], 00:38:31.975 | 99.00th=[20579], 99.50th=[23725], 99.90th=[28705], 99.95th=[29754], 00:38:31.975 | 99.99th=[29754] 00:38:31.975 write: IOPS=6885, BW=26.9MiB/s (28.2MB/s)(27.1MiB/1007msec); 0 zone resets 00:38:31.975 slat (nsec): min=1540, max=7851.4k, avg=71706.32, stdev=458363.15 00:38:31.975 clat (usec): min=540, max=33837, avg=9624.77, stdev=6292.62 00:38:31.975 lat (usec): min=549, max=33842, avg=9696.48, stdev=6340.38 00:38:31.975 clat percentiles (usec): 00:38:31.975 | 1.00th=[ 1434], 5.00th=[ 3916], 10.00th=[ 4948], 20.00th=[ 5997], 00:38:31.975 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 8160], 00:38:31.975 | 70.00th=[10290], 80.00th=[11731], 90.00th=[16319], 95.00th=[26870], 00:38:31.975 | 99.00th=[31851], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:38:31.975 | 99.99th=[33817] 00:38:31.975 bw ( KiB/s): min=24536, max=29920, per=26.22%, avg=27228.00, stdev=3807.06, samples=2 00:38:31.975 iops : min= 6134, max= 7480, avg=6807.00, stdev=951.77, samples=2 00:38:31.975 lat (usec) : 750=0.02%, 1000=0.07% 00:38:31.975 lat (msec) : 2=0.89%, 4=3.21%, 10=64.92%, 20=25.70%, 50=5.19% 00:38:31.975 cpu : usr=4.27%, sys=6.76%, ctx=604, majf=0, minf=1 00:38:31.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:38:31.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:31.975 issued rwts: total=6656,6934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:31.975 job1: (groupid=0, jobs=1): err= 0: pid=2670713: Wed Nov 27 07:33:42 2024 00:38:31.975 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:38:31.975 slat (nsec): min=895, max=13391k, avg=76970.45, stdev=573771.30 00:38:31.975 clat (usec): min=2162, max=25997, avg=9604.06, stdev=3573.85 00:38:31.975 lat (usec): min=2169, max=28313, avg=9681.03, stdev=3621.71 00:38:31.975 clat percentiles (usec): 00:38:31.975 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7046], 00:38:31.975 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 9110], 00:38:31.975 | 70.00th=[10290], 80.00th=[12387], 90.00th=[14877], 95.00th=[16057], 00:38:31.975 | 99.00th=[22676], 99.50th=[24773], 99.90th=[25035], 99.95th=[26084], 00:38:31.975 | 99.99th=[26084] 00:38:31.975 write: IOPS=6732, BW=26.3MiB/s (27.6MB/s)(26.4MiB/1004msec); 0 zone resets 00:38:31.975 slat (nsec): min=1522, max=10755k, avg=66840.29, stdev=468653.82 00:38:31.975 clat (usec): min=758, max=31347, avg=9389.66, stdev=4267.31 00:38:31.975 lat (usec): min=767, max=31663, avg=9456.50, stdev=4299.05 00:38:31.975 clat percentiles (usec): 00:38:31.975 | 1.00th=[ 2212], 5.00th=[ 4621], 10.00th=[ 6194], 20.00th=[ 6718], 00:38:31.975 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7963], 60.00th=[ 8717], 00:38:31.975 | 70.00th=[10552], 80.00th=[12649], 90.00th=[15008], 95.00th=[16319], 00:38:31.975 | 99.00th=[25297], 99.50th=[29754], 99.90th=[31327], 99.95th=[31327], 00:38:31.975 | 99.99th=[31327] 00:38:31.975 bw ( KiB/s): min=26176, max=27072, per=25.64%, 
avg=26624.00, stdev=633.57, samples=2 00:38:31.975 iops : min= 6544, max= 6768, avg=6656.00, stdev=158.39, samples=2 00:38:31.975 lat (usec) : 1000=0.03% 00:38:31.975 lat (msec) : 2=0.31%, 4=1.14%, 10=66.83%, 20=29.31%, 50=2.38% 00:38:31.975 cpu : usr=5.18%, sys=5.68%, ctx=616, majf=0, minf=2 00:38:31.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:38:31.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:31.975 issued rwts: total=6656,6759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:31.975 job2: (groupid=0, jobs=1): err= 0: pid=2670717: Wed Nov 27 07:33:42 2024 00:38:31.975 read: IOPS=7215, BW=28.2MiB/s (29.6MB/s)(28.4MiB/1006msec) 00:38:31.975 slat (nsec): min=929, max=9416.1k, avg=66166.47, stdev=530063.36 00:38:31.975 clat (usec): min=2736, max=20828, avg=8438.65, stdev=2274.78 00:38:31.975 lat (usec): min=2747, max=20833, avg=8504.82, stdev=2312.19 00:38:31.975 clat percentiles (usec): 00:38:31.975 | 1.00th=[ 3916], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6718], 00:38:31.975 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8455], 00:38:31.975 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11994], 95.00th=[13042], 00:38:31.975 | 99.00th=[14484], 99.50th=[15270], 99.90th=[20841], 99.95th=[20841], 00:38:31.975 | 99.99th=[20841] 00:38:31.975 write: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec); 0 zone resets 00:38:31.975 slat (nsec): min=1663, max=10582k, avg=62805.56, stdev=448591.78 00:38:31.975 clat (usec): min=1170, max=63390, avg=8638.35, stdev=6399.28 00:38:31.975 lat (usec): min=1182, max=64270, avg=8701.15, stdev=6436.34 00:38:31.975 clat percentiles (usec): 00:38:31.975 | 1.00th=[ 3064], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5800], 00:38:31.975 | 30.00th=[ 6652], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 7898], 00:38:31.975 | 70.00th=[ 8160], 80.00th=[ 9503], 90.00th=[11863], 95.00th=[12780], 00:38:31.975 | 99.00th=[50594], 99.50th=[56886], 99.90th=[63177], 99.95th=[63177], 00:38:31.975 | 99.99th=[63177] 00:38:31.975 bw ( KiB/s): min=28729, max=32480, per=29.47%, avg=30604.50, stdev=2652.36, samples=2 00:38:31.975 iops : min= 7182, max= 8120, avg=7651.00, stdev=663.27, samples=2 00:38:31.975 lat (msec) : 2=0.09%, 4=1.67%, 10=79.06%, 20=17.79%, 50=0.86% 00:38:31.975 lat (msec) : 100=0.53% 00:38:31.975 cpu : usr=6.67%, sys=5.87%, ctx=577, majf=0, minf=1 00:38:31.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:38:31.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:31.975 issued rwts: total=7259,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:31.975 job3: (groupid=0, jobs=1): err= 0: pid=2670719: Wed Nov 27 07:33:42 2024 00:38:31.975 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:38:31.975 slat (nsec): min=972, max=7291.4k, avg=81540.28, stdev=538476.22 00:38:31.975 clat (usec): min=5366, max=22659, avg=10629.15, stdev=2741.16 00:38:31.975 lat (usec): min=5371, max=23688, avg=10710.69, stdev=2787.74 00:38:31.975 clat percentiles (usec): 00:38:31.975 | 1.00th=[ 6259], 5.00th=[ 7177], 10.00th=[ 7963], 20.00th=[ 8586], 00:38:31.975 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10552], 
00:38:31.975 | 70.00th=[10945], 80.00th=[12649], 90.00th=[14484], 95.00th=[15401], 00:38:31.975 | 99.00th=[19268], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:38:31.975 | 99.99th=[22676] 00:38:31.975 write: IOPS=4753, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1003msec); 0 zone resets 00:38:31.975 slat (nsec): min=1672, max=42065k, avg=122142.35, stdev=1478213.63 00:38:31.975 clat (usec): min=649, max=205560, avg=12988.36, stdev=15236.60 00:38:31.975 lat (usec): min=658, max=205569, avg=13110.50, stdev=15502.20 00:38:31.975 clat percentiles (msec): 00:38:31.975 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:38:31.975 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:38:31.975 | 70.00th=[ 12], 80.00th=[ 15], 90.00th=[ 21], 95.00th=[ 42], 00:38:31.975 | 99.00th=[ 83], 99.50th=[ 125], 99.90th=[ 207], 99.95th=[ 207], 00:38:31.975 | 99.99th=[ 207] 00:38:31.975 bw ( KiB/s): min=16384, max=20744, per=17.88%, avg=18564.00, stdev=3082.99, samples=2 00:38:31.975 iops : min= 4096, max= 5186, avg=4641.00, stdev=770.75, samples=2 00:38:31.975 lat (usec) : 750=0.03%, 1000=0.01% 00:38:31.975 lat (msec) : 4=1.29%, 10=52.54%, 20=39.91%, 50=5.54%, 100=0.34% 00:38:31.975 lat (msec) : 250=0.34% 00:38:31.975 cpu : usr=3.29%, sys=4.89%, ctx=413, majf=0, minf=2 00:38:31.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:38:31.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:31.975 issued rwts: total=4608,4768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:31.975 00:38:31.975 Run status group 0 (all jobs): 00:38:31.975 READ: bw=97.7MiB/s (102MB/s), 17.9MiB/s-28.2MiB/s (18.8MB/s-29.6MB/s), io=98.4MiB (103MB), run=1003-1007msec 00:38:31.975 WRITE: bw=101MiB/s (106MB/s), 18.6MiB/s-29.8MiB/s (19.5MB/s-31.3MB/s), io=102MiB (107MB), run=1003-1007msec 00:38:31.975 00:38:31.975 Disk stats (read/write): 00:38:31.975 nvme0n1: ios=5284/5632, merge=0/0, ticks=27376/31860, in_queue=59236, util=95.79% 00:38:31.975 nvme0n2: ios=5368/5632, merge=0/0, ticks=36827/37122, in_queue=73949, util=87.76% 00:38:31.975 nvme0n3: ios=6144/6295, merge=0/0, ticks=48428/52778, in_queue=101206, util=88.27% 00:38:31.975 nvme0n4: ios=3622/3759, merge=0/0, ticks=21826/21627, in_queue=43453, util=99.89% 00:38:31.975 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:38:31.975 [global] 00:38:31.975 thread=1 00:38:31.975 invalidate=1 00:38:31.975 rw=randwrite 00:38:31.975 time_based=1 00:38:31.975 runtime=1 00:38:31.975 ioengine=libaio 00:38:31.975 direct=1 00:38:31.975 bs=4096 00:38:31.975 iodepth=128 00:38:31.975 norandommap=0 00:38:31.975 numjobs=1 00:38:31.975 00:38:31.975 verify_dump=1 00:38:31.975 verify_backlog=512 00:38:31.975 verify_state_save=0 00:38:31.975 do_verify=1 00:38:31.975 verify=crc32c-intel 00:38:31.975 [job0] 00:38:31.975 filename=/dev/nvme0n1 00:38:31.975 [job1] 00:38:31.975 filename=/dev/nvme0n2 00:38:31.975 [job2] 00:38:31.975 filename=/dev/nvme0n3 00:38:31.975 [job3] 00:38:31.975 filename=/dev/nvme0n4 00:38:31.976 Could not set queue depth (nvme0n1) 00:38:31.976 Could not set queue depth (nvme0n2) 00:38:31.976 Could not set queue depth (nvme0n3) 00:38:31.976 Could not set queue depth (nvme0n4) 00:38:32.235 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:32.235 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:32.235 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:32.235 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:32.235 fio-3.35 00:38:32.235 Starting 4 threads 00:38:33.648 00:38:33.648 job0: (groupid=0, jobs=1): err= 0: pid=2671233: Wed Nov 27 07:33:44 2024 00:38:33.648 read: IOPS=3667, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1007msec) 00:38:33.648 slat (nsec): min=883, max=14571k, avg=133522.12, stdev=837636.14 00:38:33.648 clat (usec): min=3301, max=40487, avg=16517.86, stdev=8450.16 00:38:33.648 lat (usec): min=3304, max=40492, avg=16651.38, stdev=8489.50 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 5211], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 8979], 00:38:33.648 | 30.00th=[ 9503], 40.00th=[12649], 50.00th=[15139], 60.00th=[17433], 00:38:33.648 | 70.00th=[19268], 80.00th=[22938], 90.00th=[29230], 95.00th=[33424], 00:38:33.648 | 99.00th=[38536], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:38:33.648 | 99.99th=[40633] 00:38:33.648 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:38:33.648 slat (nsec): min=1495, max=15880k, avg=119081.01, stdev=720789.22 00:38:33.648 clat (usec): min=1193, max=43093, avg=16273.53, stdev=8620.11 00:38:33.648 lat (usec): min=1218, max=43097, avg=16392.61, stdev=8668.00 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6915], 20.00th=[ 8225], 00:38:33.648 | 30.00th=[ 8979], 40.00th=[11994], 50.00th=[15008], 60.00th=[17171], 00:38:33.648 | 70.00th=[20579], 80.00th=[24773], 90.00th=[28705], 95.00th=[32900], 00:38:33.648 | 99.00th=[38536], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:38:33.648 | 99.99th=[43254] 00:38:33.648 bw ( KiB/s): min=12904, max=19712, per=16.90%, avg=16308.00, stdev=4813.98, samples=2 00:38:33.648 iops : min= 3226, max= 4928, avg=4077.00, stdev=1203.50, samples=2 00:38:33.648 lat (msec) : 2=0.13%, 4=0.18%, 10=34.43%, 20=35.19%, 50=30.07% 00:38:33.648 cpu : usr=2.49%, sys=3.88%, ctx=354, majf=0, minf=2 00:38:33.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:38:33.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:33.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:33.648 issued rwts: total=3693,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:33.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:33.648 job1: (groupid=0, jobs=1): err= 0: pid=2671234: Wed Nov 27 07:33:44 2024 00:38:33.648 read: IOPS=9161, BW=35.8MiB/s (37.5MB/s)(36.0MiB/1006msec) 00:38:33.648 slat (nsec): min=899, max=6570.6k, avg=53265.56, stdev=392033.81 00:38:33.648 clat (usec): min=2860, max=26597, avg=7133.10, stdev=2044.67 00:38:33.648 lat (usec): min=2864, max=26606, avg=7186.37, stdev=2068.67 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 3687], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5604], 00:38:33.648 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7111], 00:38:33.648 | 70.00th=[ 7767], 80.00th=[ 8455], 90.00th=[ 9503], 95.00th=[10421], 00:38:33.648 | 99.00th=[13829], 99.50th=[15401], 99.90th=[26608], 99.95th=[26608], 00:38:33.648 | 99.99th=[26608] 00:38:33.648 write: 
IOPS=9562, BW=37.4MiB/s (39.2MB/s)(37.6MiB/1006msec); 0 zone resets 00:38:33.648 slat (nsec): min=1490, max=6367.7k, avg=48999.34, stdev=343550.26 00:38:33.648 clat (usec): min=1223, max=18918, avg=6434.07, stdev=2057.45 00:38:33.648 lat (usec): min=1233, max=18927, avg=6483.07, stdev=2069.09 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 2900], 5.00th=[ 3556], 10.00th=[ 3949], 20.00th=[ 4752], 00:38:33.648 | 30.00th=[ 5538], 40.00th=[ 5932], 50.00th=[ 6194], 60.00th=[ 6783], 00:38:33.648 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8586], 95.00th=[10028], 00:38:33.648 | 99.00th=[13566], 99.50th=[14746], 99.90th=[15139], 99.95th=[15139], 00:38:33.648 | 99.99th=[19006] 00:38:33.648 bw ( KiB/s): min=36864, max=39072, per=39.35%, avg=37968.00, stdev=1561.29, samples=2 00:38:33.648 iops : min= 9216, max= 9768, avg=9492.00, stdev=390.32, samples=2 00:38:33.648 lat (msec) : 2=0.15%, 4=6.37%, 10=87.61%, 20=5.71%, 50=0.16% 00:38:33.648 cpu : usr=4.68%, sys=8.46%, ctx=654, majf=0, minf=2 00:38:33.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:38:33.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:33.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:33.648 issued rwts: total=9216,9620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:33.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:33.648 job2: (groupid=0, jobs=1): err= 0: pid=2671239: Wed Nov 27 07:33:44 2024 00:38:33.648 read: IOPS=5663, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1003msec) 00:38:33.648 slat (nsec): min=939, max=8778.9k, avg=84887.79, stdev=629456.53 00:38:33.648 clat (usec): min=2480, max=24296, avg=11311.25, stdev=3649.20 00:38:33.648 lat (usec): min=2485, max=24790, avg=11396.14, stdev=3694.79 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 4948], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 8094], 00:38:33.648 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[10028], 60.00th=[12780], 00:38:33.648 | 70.00th=[13960], 80.00th=[14746], 90.00th=[16188], 95.00th=[16909], 00:38:33.648 | 99.00th=[19792], 99.50th=[21365], 99.90th=[22676], 99.95th=[23987], 00:38:33.648 | 99.99th=[24249] 00:38:33.648 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:38:33.648 slat (nsec): min=1595, max=9249.3k, avg=77151.07, stdev=514669.98 00:38:33.648 clat (usec): min=947, max=38191, avg=10254.83, stdev=5232.21 00:38:33.648 lat (usec): min=967, max=38193, avg=10331.98, stdev=5279.65 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 3064], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 7177], 00:38:33.648 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 9241], 60.00th=[10159], 00:38:33.648 | 70.00th=[11731], 80.00th=[12780], 90.00th=[14222], 95.00th=[17433], 00:38:33.648 | 99.00th=[35390], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:38:33.648 | 99.99th=[38011] 00:38:33.648 bw ( KiB/s): min=23992, max=24528, per=25.14%, avg=24260.00, stdev=379.01, samples=2 00:38:33.648 iops : min= 5998, max= 6132, avg=6065.00, stdev=94.75, samples=2 00:38:33.648 lat (usec) : 1000=0.02% 00:38:33.648 lat (msec) : 2=0.02%, 4=2.20%, 10=51.58%, 20=43.83%, 50=2.36% 00:38:33.648 cpu : usr=5.29%, sys=5.09%, ctx=414, majf=0, minf=1 00:38:33.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:38:33.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:33.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:33.648 issued rwts: 
total=5680,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:33.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:33.648 job3: (groupid=0, jobs=1): err= 0: pid=2671240: Wed Nov 27 07:33:44 2024 00:38:33.648 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:38:33.648 slat (nsec): min=923, max=11279k, avg=123655.03, stdev=757173.72 00:38:33.648 clat (usec): min=1977, max=32801, avg=16071.60, stdev=5062.67 00:38:33.648 lat (usec): min=2017, max=32827, avg=16195.26, stdev=5119.17 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 4293], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10814], 00:38:33.648 | 30.00th=[12911], 40.00th=[15139], 50.00th=[15926], 60.00th=[17171], 00:38:33.648 | 70.00th=[18744], 80.00th=[21365], 90.00th=[22676], 95.00th=[24249], 00:38:33.648 | 99.00th=[26346], 99.50th=[27132], 99.90th=[29754], 99.95th=[31589], 00:38:33.648 | 99.99th=[32900] 00:38:33.648 write: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1006msec); 0 zone resets 00:38:33.648 slat (nsec): min=1623, max=16090k, avg=106082.67, stdev=685813.81 00:38:33.648 clat (usec): min=1204, max=41265, avg=13939.43, stdev=5371.72 00:38:33.648 lat (usec): min=1214, max=41297, avg=14045.51, stdev=5413.92 00:38:33.648 clat percentiles (usec): 00:38:33.648 | 1.00th=[ 5800], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 9634], 00:38:33.648 | 30.00th=[10683], 40.00th=[11469], 50.00th=[13042], 60.00th=[14746], 00:38:33.648 | 70.00th=[16450], 80.00th=[17957], 90.00th=[20055], 95.00th=[21627], 00:38:33.648 | 99.00th=[32900], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:38:33.648 | 99.99th=[41157] 00:38:33.648 bw ( KiB/s): min=16384, max=18048, per=17.84%, avg=17216.00, stdev=1176.63, samples=2 00:38:33.648 iops : min= 4096, max= 4512, avg=4304.00, stdev=294.16, samples=2 00:38:33.648 lat (msec) : 2=0.04%, 4=0.50%, 10=18.59%, 20=63.74%, 50=17.13% 00:38:33.648 cpu : usr=3.58%, sys=4.28%, ctx=352, majf=0, minf=1 00:38:33.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:38:33.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:33.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:33.648 issued rwts: total=4096,4432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:33.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:33.648 00:38:33.648 Run status group 0 (all jobs): 00:38:33.648 READ: bw=88.0MiB/s (92.3MB/s), 14.3MiB/s-35.8MiB/s (15.0MB/s-37.5MB/s), io=88.6MiB (92.9MB), run=1003-1007msec 00:38:33.648 WRITE: bw=94.2MiB/s (98.8MB/s), 15.9MiB/s-37.4MiB/s (16.7MB/s-39.2MB/s), io=94.9MiB (99.5MB), run=1003-1007msec 00:38:33.648 00:38:33.648 Disk stats (read/write): 00:38:33.648 nvme0n1: ios=3026/3072, merge=0/0, ticks=14994/15721, in_queue=30715, util=82.26% 00:38:33.648 nvme0n2: ios=7212/7591, merge=0/0, ticks=40859/37402, in_queue=78261, util=98.76% 00:38:33.648 nvme0n3: ios=4134/4607, merge=0/0, ticks=26773/27793, in_queue=54566, util=94.46% 00:38:33.648 nvme0n4: ios=2963/3072, merge=0/0, ticks=16633/15488, in_queue=32121, util=90.00% 00:38:33.648 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:38:33.648 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2671569 00:38:33.649 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:38:33.649 07:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:38:33.649 [global] 00:38:33.649 thread=1 00:38:33.649 invalidate=1 00:38:33.649 rw=read 00:38:33.649 time_based=1 00:38:33.649 runtime=10 00:38:33.649 ioengine=libaio 00:38:33.649 direct=1 00:38:33.649 bs=4096 00:38:33.649 iodepth=1 00:38:33.649 norandommap=1 00:38:33.649 numjobs=1 00:38:33.649 00:38:33.649 [job0] 00:38:33.649 filename=/dev/nvme0n1 00:38:33.649 [job1] 00:38:33.649 filename=/dev/nvme0n2 00:38:33.649 [job2] 00:38:33.649 filename=/dev/nvme0n3 00:38:33.649 [job3] 00:38:33.649 filename=/dev/nvme0n4 00:38:33.649 Could not set queue depth (nvme0n1) 00:38:33.649 Could not set queue depth (nvme0n2) 00:38:33.649 Could not set queue depth (nvme0n3) 00:38:33.649 Could not set queue depth (nvme0n4) 00:38:33.913 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:33.913 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:33.913 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:33.913 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:33.913 fio-3.35 00:38:33.913 Starting 4 threads 00:38:36.451 07:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:38:36.711 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9703424, buflen=4096 00:38:36.711 fio: pid=2671759, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:36.711 07:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:38:36.711 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=14204928, buflen=4096 00:38:36.711 fio: pid=2671757, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:36.711 07:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:36.711 07:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:38:36.971 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:36.971 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:38:36.971 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9613312, buflen=4096 00:38:36.971 fio: pid=2671755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:37.231 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:37.231 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:38:37.231 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=10153984, buflen=4096 00:38:37.231 fio: pid=2671756, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:37.231 00:38:37.231 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2671755: Wed Nov 27 07:33:48 2024 00:38:37.232 read: IOPS=781, BW=3125KiB/s (3200kB/s)(9388KiB/3004msec) 00:38:37.232 slat (usec): min=7, max=21800, avg=39.85, stdev=515.49 00:38:37.232 clat (usec): min=545, max=42262, avg=1224.20, stdev=2631.21 00:38:37.232 lat (usec): min=570, max=42288, avg=1264.06, stdev=2679.43 00:38:37.232 clat percentiles (usec): 00:38:37.232 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 988], 00:38:37.232 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1090], 00:38:37.232 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:38:37.232 | 99.00th=[ 1254], 99.50th=[ 1532], 99.90th=[42206], 99.95th=[42206], 00:38:37.232 | 99.99th=[42206] 00:38:37.232 bw ( KiB/s): min= 2920, max= 3696, per=26.25%, avg=3528.00, stdev=340.12, samples=5 00:38:37.232 iops : min= 730, max= 924, avg=882.00, stdev=85.03, samples=5 00:38:37.232 lat (usec) : 750=0.38%, 1000=23.72% 00:38:37.232 lat (msec) : 2=75.43%, 50=0.43% 00:38:37.232 cpu : usr=0.63%, sys=2.56%, ctx=2351, majf=0, minf=1 00:38:37.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:37.232 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2671756: Wed Nov 27 07:33:48 2024 00:38:37.232 read: IOPS=781, BW=3124KiB/s (3199kB/s)(9916KiB/3174msec) 00:38:37.232 slat (usec): min=6, max=15839, avg=36.41, stdev=397.74 00:38:37.232 clat (usec): min=460, max=42231, avg=1226.88, stdev=2833.60 00:38:37.232 lat (usec): min=467, max=42239, avg=1263.29, stdev=2861.34 00:38:37.232 clat percentiles (usec): 00:38:37.232 | 1.00th=[ 660], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 955], 00:38:37.232 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:38:37.232 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:38:37.232 | 99.00th=[ 1254], 99.50th=[ 6063], 99.90th=[42206], 99.95th=[42206], 00:38:37.232 | 99.99th=[42206] 00:38:37.232 bw ( KiB/s): min= 1431, max= 3760, per=24.27%, avg=3262.50, stdev=921.72, samples=6 00:38:37.232 iops : min= 357, max= 940, avg=815.50, stdev=230.73, samples=6 00:38:37.232 lat (usec) : 500=0.04%, 750=2.74%, 1000=28.51% 00:38:37.232 lat (msec) : 2=68.15%, 10=0.04%, 50=0.48% 00:38:37.232 cpu : usr=0.50%, sys=2.68%, ctx=2483, majf=0, minf=2 00:38:37.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:37.232 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2671757: Wed Nov 27 07:33:48 2024 00:38:37.232 read: IOPS=1244, BW=4976KiB/s (5095kB/s)(13.5MiB/2788msec) 00:38:37.232 slat (usec): 
min=6, max=17416, avg=33.26, stdev=370.08 00:38:37.232 clat (usec): min=254, max=42040, avg=758.78, stdev=1216.86 00:38:37.232 lat (usec): min=261, max=42066, avg=792.04, stdev=1272.69 00:38:37.232 clat percentiles (usec): 00:38:37.232 | 1.00th=[ 449], 5.00th=[ 537], 10.00th=[ 578], 20.00th=[ 635], 00:38:37.232 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 758], 00:38:37.232 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 873], 00:38:37.232 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 2540], 99.95th=[41681], 00:38:37.232 | 99.99th=[42206] 00:38:37.232 bw ( KiB/s): min= 3928, max= 5344, per=37.36%, avg=5020.80, stdev=612.26, samples=5 00:38:37.232 iops : min= 982, max= 1336, avg=1255.20, stdev=153.07, samples=5 00:38:37.232 lat (usec) : 500=2.80%, 750=53.82%, 1000=43.01% 00:38:37.232 lat (msec) : 2=0.20%, 4=0.06%, 50=0.09% 00:38:37.232 cpu : usr=1.72%, sys=4.77%, ctx=3472, majf=0, minf=2 00:38:37.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 issued rwts: total=3469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:37.232 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2671759: Wed Nov 27 07:33:48 2024 00:38:37.232 read: IOPS=909, BW=3638KiB/s (3725kB/s)(9476KiB/2605msec) 00:38:37.232 slat (nsec): min=26016, max=71678, avg=27069.54, stdev=3239.28 00:38:37.232 clat (usec): min=627, max=41183, avg=1054.97, stdev=830.38 00:38:37.232 lat (usec): min=653, max=41210, avg=1082.04, stdev=830.36 00:38:37.232 clat percentiles (usec): 00:38:37.232 | 1.00th=[ 766], 5.00th=[ 857], 10.00th=[ 922], 20.00th=[ 971], 00:38:37.232 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:38:37.232 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:38:37.232 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 2376], 00:38:37.232 | 99.99th=[41157] 00:38:37.232 bw ( KiB/s): min= 3416, max= 3768, per=27.33%, avg=3673.60, stdev=147.34, samples=5 00:38:37.232 iops : min= 854, max= 942, avg=918.40, stdev=36.83, samples=5 00:38:37.232 lat (usec) : 750=0.55%, 1000=28.31% 00:38:37.232 lat (msec) : 2=71.01%, 4=0.04%, 50=0.04% 00:38:37.232 cpu : usr=1.50%, sys=3.80%, ctx=2371, majf=0, minf=2 00:38:37.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.232 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:37.232 00:38:37.232 Run status group 0 (all jobs): 00:38:37.232 READ: bw=13.1MiB/s (13.8MB/s), 3124KiB/s-4976KiB/s (3199kB/s-5095kB/s), io=41.7MiB (43.7MB), run=2605-3174msec 00:38:37.232 00:38:37.232 Disk stats (read/write): 00:38:37.232 nvme0n1: ios=2343/0, merge=0/0, ticks=2637/0, in_queue=2637, util=93.69% 00:38:37.232 nvme0n2: ios=2477/0, merge=0/0, ticks=2902/0, in_queue=2902, util=94.89% 00:38:37.232 nvme0n3: ios=3258/0, merge=0/0, ticks=2163/0, in_queue=2163, util=95.99% 00:38:37.232 nvme0n4: ios=2370/0, merge=0/0, ticks=2285/0, in_queue=2285, util=96.42% 00:38:37.232 07:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:37.232 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:38:37.492 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:37.492 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:38:37.753 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:37.753 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:38:38.013 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:38.013 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:38:38.013 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:38:38.013 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2671569 00:38:38.013 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:38:38.013 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:38.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:38:38.303 nvmf hotplug test: fio failed as expected 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.303 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.583 rmmod nvme_tcp 00:38:38.584 rmmod nvme_fabrics 00:38:38.584 rmmod nvme_keyring 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2668359 ']' 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2668359 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2668359 ']' 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2668359 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2668359 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2668359' 00:38:38.584 killing process with pid 2668359 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2668359 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2668359 00:38:38.584 
07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.584 07:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.171 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.171 00:38:41.171 real 0m28.353s 00:38:41.171 user 2m27.117s 00:38:41.171 sys 0m12.365s 00:38:41.172 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.172 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:41.172 ************************************ 00:38:41.172 END TEST nvmf_fio_target 00:38:41.172 ************************************ 00:38:41.172 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:41.172 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:41.172 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.172 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.172 ************************************ 00:38:41.172 START TEST nvmf_bdevio 00:38:41.172 ************************************ 00:38:41.172 07:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:41.172 * Looking for test storage... 
00:38:41.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:41.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.172 --rc genhtml_branch_coverage=1 00:38:41.172 --rc genhtml_function_coverage=1 00:38:41.172 --rc genhtml_legend=1 00:38:41.172 --rc geninfo_all_blocks=1 00:38:41.172 --rc geninfo_unexecuted_blocks=1 00:38:41.172 00:38:41.172 ' 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:41.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.172 --rc genhtml_branch_coverage=1 00:38:41.172 --rc genhtml_function_coverage=1 00:38:41.172 --rc genhtml_legend=1 00:38:41.172 --rc geninfo_all_blocks=1 00:38:41.172 --rc geninfo_unexecuted_blocks=1 00:38:41.172 00:38:41.172 ' 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:41.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.172 --rc genhtml_branch_coverage=1 00:38:41.172 --rc genhtml_function_coverage=1 00:38:41.172 --rc genhtml_legend=1 00:38:41.172 --rc geninfo_all_blocks=1 00:38:41.172 --rc geninfo_unexecuted_blocks=1 00:38:41.172 00:38:41.172 ' 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:41.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.172 --rc genhtml_branch_coverage=1 00:38:41.172 --rc genhtml_function_coverage=1 00:38:41.172 --rc genhtml_legend=1 00:38:41.172 --rc geninfo_all_blocks=1 00:38:41.172 --rc geninfo_unexecuted_blocks=1 00:38:41.172 00:38:41.172 ' 00:38:41.172 07:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.172 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.173 07:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.173 07:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:49.306 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:49.306 07:33:59 
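
gather_supported_nvmf_pci_devs builds one BDF list per NIC family by indexing pci_bus_cache with "vendor:device" keys, then narrows pci_devs to the requested family via the [[ e810 == e810 ]] branch. A condensed sketch, under the assumption that pci_bus_cache is an associative array populated earlier in nvmf/common.sh that maps each key to the matching PCI addresses:

declare -A pci_bus_cache   # assumed: pci_bus_cache["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # Intel E810 device IDs
e810+=(${pci_bus_cache["$intel:0x159b"]})   # matched twice on this rig (ice driver)
pci_devs=("${e810[@]}")                     # only the e810 family is kept for this job
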
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:49.306 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:49.306 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:49.306 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.306 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:49.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:38:49.307 00:38:49.307 --- 10.0.0.2 ping statistics --- 00:38:49.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.307 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:49.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:38:49.307 00:38:49.307 --- 10.0.0.1 ping statistics --- 00:38:49.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.307 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:49.307 07:33:59 
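
nvmf_tcp_init splits the two E810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target interface at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and the two pings above verify reachability in both directions. The same topology, condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # default ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator
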
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2676783 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2676783 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2676783 ']' 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:49.307 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.307 [2024-11-27 07:33:59.682113] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:49.307 [2024-11-27 07:33:59.683614] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:38:49.307 [2024-11-27 07:33:59.683676] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.307 [2024-11-27 07:33:59.784617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:49.307 [2024-11-27 07:33:59.836763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.307 [2024-11-27 07:33:59.836813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:49.307 [2024-11-27 07:33:59.836821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.307 [2024-11-27 07:33:59.836828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.307 [2024-11-27 07:33:59.836835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:49.307 [2024-11-27 07:33:59.839105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:49.307 [2024-11-27 07:33:59.839268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:49.307 [2024-11-27 07:33:59.839713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:49.307 [2024-11-27 07:33:59.839717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:49.307 [2024-11-27 07:33:59.918102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:49.307 [2024-11-27 07:33:59.919044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:49.307 [2024-11-27 07:33:59.919347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:49.307 [2024-11-27 07:33:59.919914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:49.307 [2024-11-27 07:33:59.919942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:49.307 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.307 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:38:49.307 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:49.307 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.307 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.568 [2024-11-27 07:34:00.540710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.568 Malloc0 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.568 07:34:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:49.568 [2024-11-27 07:34:00.636989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:49.568 { 00:38:49.568 "params": { 00:38:49.568 "name": "Nvme$subsystem", 00:38:49.568 "trtype": "$TEST_TRANSPORT", 00:38:49.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:49.568 "adrfam": "ipv4", 00:38:49.568 "trsvcid": "$NVMF_PORT", 00:38:49.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:49.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:49.568 "hdgst": ${hdgst:-false}, 00:38:49.568 "ddgst": ${ddgst:-false} 00:38:49.568 }, 00:38:49.568 "method": "bdev_nvme_attach_controller" 00:38:49.568 } 00:38:49.568 EOF 00:38:49.568 )") 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:38:49.568 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:49.568 "params": { 00:38:49.568 "name": "Nvme1", 00:38:49.568 "trtype": "tcp", 00:38:49.568 "traddr": "10.0.0.2", 00:38:49.568 "adrfam": "ipv4", 00:38:49.568 "trsvcid": "4420", 00:38:49.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:49.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:49.568 "hdgst": false, 00:38:49.568 "ddgst": false 00:38:49.568 }, 00:38:49.568 "method": "bdev_nvme_attach_controller" 00:38:49.568 }' 00:38:49.568 [2024-11-27 07:34:00.695723] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
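
The target-side provisioning above boils down to one malloc bdev and one subsystem exposed on the namespaced interface, after which gen_nvmf_target_json emits the bdev_nvme_attach_controller config that bdevio reads through /dev/fd/62 (a process substitution). Roughly the same sequence issued by hand, assuming rpc_cmd is effectively scripts/rpc.py talking to the default /var/tmp/spdk.sock socket:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # transport flags as recorded in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)   # the substituted fd appears as /dev/fd/62
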
00:38:49.568 [2024-11-27 07:34:00.695799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677116 ] 00:38:49.828 [2024-11-27 07:34:00.790676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:49.828 [2024-11-27 07:34:00.846779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.828 [2024-11-27 07:34:00.846942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.828 [2024-11-27 07:34:00.846942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:50.088 I/O targets: 00:38:50.088 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:38:50.088 00:38:50.088 00:38:50.088 CUnit - A unit testing framework for C - Version 2.1-3 00:38:50.088 http://cunit.sourceforge.net/ 00:38:50.088 00:38:50.088 00:38:50.088 Suite: bdevio tests on: Nvme1n1 00:38:50.088 Test: blockdev write read block ...passed 00:38:50.088 Test: blockdev write zeroes read block ...passed 00:38:50.088 Test: blockdev write zeroes read no split ...passed 00:38:50.088 Test: blockdev write zeroes read split ...passed 00:38:50.088 Test: blockdev write zeroes read split partial ...passed 00:38:50.088 Test: blockdev reset ...[2024-11-27 07:34:01.213684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:38:50.088 [2024-11-27 07:34:01.213797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2500970 (9): Bad file descriptor 00:38:50.088 [2024-11-27 07:34:01.261573] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:38:50.088 passed 00:38:50.348 Test: blockdev write read 8 blocks ...passed 00:38:50.348 Test: blockdev write read size > 128k ...passed 00:38:50.348 Test: blockdev write read invalid size ...passed 00:38:50.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:50.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:50.348 Test: blockdev write read max offset ...passed 00:38:50.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:50.348 Test: blockdev writev readv 8 blocks ...passed 00:38:50.348 Test: blockdev writev readv 30 x 1block ...passed 00:38:50.348 Test: blockdev writev readv block ...passed 00:38:50.348 Test: blockdev writev readv size > 128k ...passed 00:38:50.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:50.348 Test: blockdev comparev and writev ...[2024-11-27 07:34:01.528301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.348 [2024-11-27 07:34:01.528348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:50.348 [2024-11-27 07:34:01.528365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.348 [2024-11-27 07:34:01.528374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:50.348 [2024-11-27 07:34:01.528981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.348 [2024-11-27 07:34:01.528993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:50.348 [2024-11-27 07:34:01.529007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.348 [2024-11-27 07:34:01.529015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:50.348 [2024-11-27 07:34:01.529639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.348 [2024-11-27 07:34:01.529651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:50.348 [2024-11-27 07:34:01.529665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.349 [2024-11-27 07:34:01.529672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:50.349 [2024-11-27 07:34:01.530291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.349 [2024-11-27 07:34:01.530303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:50.349 [2024-11-27 07:34:01.530325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:50.349 [2024-11-27 07:34:01.530333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:50.608 passed 00:38:50.609 Test: blockdev nvme passthru rw ...passed 00:38:50.609 Test: blockdev nvme passthru vendor specific ...[2024-11-27 07:34:01.616023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:50.609 [2024-11-27 07:34:01.616041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:50.609 [2024-11-27 07:34:01.616287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:50.609 [2024-11-27 07:34:01.616298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:50.609 [2024-11-27 07:34:01.616661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:50.609 [2024-11-27 07:34:01.616672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:50.609 [2024-11-27 07:34:01.617033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:50.609 [2024-11-27 07:34:01.617044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:50.609 passed 00:38:50.609 Test: blockdev nvme admin passthru ...passed 00:38:50.609 Test: blockdev copy ...passed 00:38:50.609 00:38:50.609 Run Summary: Type Total Ran Passed Failed Inactive 00:38:50.609 suites 1 1 n/a 0 0 00:38:50.609 tests 23 23 23 0 0 00:38:50.609 asserts 152 152 152 0 n/a 00:38:50.609 00:38:50.609 Elapsed time = 1.262 seconds 00:38:50.609 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:50.609 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.609 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.869 rmmod nvme_tcp 00:38:50.869 rmmod nvme_fabrics 00:38:50.869 rmmod nvme_keyring 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2676783 ']' 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2676783 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2676783 ']' 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2676783 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676783 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676783' 00:38:50.869 killing process with pid 2676783 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2676783 00:38:50.869 07:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2676783 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:51.129 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.039 07:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:53.039 00:38:53.039 real 0m12.331s 00:38:53.039 user 
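
nvmftestfini tears things down in reverse: killprocess validates the pid, checks the process comm (reactor_3 here) so it never blindly signals a sudo wrapper, then kills and reaps the target; iptr then strips every firewall rule the test added by filtering its SPDK_NVMF comment tag out of a save/restore round trip. A condensed sketch of that cleanup path:

kill -0 "$nvmfpid"                                    # fails fast if the target already exited
process_name=$(ps --no-headers -o comm= "$nvmfpid")   # reactor_3 in this run
[ "$process_name" = sudo ] || { kill "$nvmfpid"; wait "$nvmfpid"; }
# every test rule was inserted with an 'SPDK_NVMF:...' comment, so one
# save/filter/restore pass removes them all regardless of their content:
iptables-save | grep -v SPDK_NVMF | iptables-restore
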
0m10.040s 00:38:53.039 sys 0m6.476s 00:38:53.039 07:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.039 07:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:53.039 ************************************ 00:38:53.039 END TEST nvmf_bdevio 00:38:53.039 ************************************ 00:38:53.299 07:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:53.299 00:38:53.299 real 5m2.511s 00:38:53.299 user 10m23.001s 00:38:53.299 sys 2m4.651s 00:38:53.299 07:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.299 07:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:53.299 ************************************ 00:38:53.299 END TEST nvmf_target_core_interrupt_mode 00:38:53.299 ************************************ 00:38:53.299 07:34:04 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:53.299 07:34:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:53.299 07:34:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:53.299 07:34:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.299 ************************************ 00:38:53.299 START TEST nvmf_interrupt 00:38:53.299 ************************************ 00:38:53.299 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:53.299 * Looking for test storage... 
00:38:53.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:53.299 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:53.299 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:38:53.299 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:53.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.560 --rc genhtml_branch_coverage=1 00:38:53.560 --rc genhtml_function_coverage=1 00:38:53.560 --rc genhtml_legend=1 00:38:53.560 --rc geninfo_all_blocks=1 00:38:53.560 --rc geninfo_unexecuted_blocks=1 00:38:53.560 00:38:53.560 ' 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:53.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.560 --rc genhtml_branch_coverage=1 00:38:53.560 --rc genhtml_function_coverage=1 00:38:53.560 --rc genhtml_legend=1 00:38:53.560 --rc geninfo_all_blocks=1 00:38:53.560 --rc geninfo_unexecuted_blocks=1 00:38:53.560 00:38:53.560 ' 00:38:53.560 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:53.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.561 --rc genhtml_branch_coverage=1 00:38:53.561 --rc genhtml_function_coverage=1 00:38:53.561 --rc genhtml_legend=1 00:38:53.561 --rc geninfo_all_blocks=1 00:38:53.561 --rc geninfo_unexecuted_blocks=1 00:38:53.561 00:38:53.561 ' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:53.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.561 --rc genhtml_branch_coverage=1 00:38:53.561 --rc genhtml_function_coverage=1 00:38:53.561 --rc genhtml_legend=1 00:38:53.561 --rc geninfo_all_blocks=1 00:38:53.561 --rc geninfo_unexecuted_blocks=1 00:38:53.561 00:38:53.561 ' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:38:53.561 07:34:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:01.696 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:01.697 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.697 07:34:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:01.697 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:01.697 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:01.697 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:01.697 07:34:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.697 07:34:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:01.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:39:01.697 00:39:01.697 --- 10.0.0.2 ping statistics --- 00:39:01.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.697 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:01.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:39:01.697 00:39:01.697 --- 10.0.0.1 ping statistics --- 00:39:01.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.697 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2681488 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2681488 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2681488 ']' 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:01.697 07:34:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:01.697 [2024-11-27 07:34:12.234844] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:01.697 [2024-11-27 07:34:12.235955] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:39:01.697 [2024-11-27 07:34:12.236003] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.697 [2024-11-27 07:34:12.322471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:01.697 [2024-11-27 07:34:12.374062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:01.697 [2024-11-27 07:34:12.374108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:01.697 [2024-11-27 07:34:12.374117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.697 [2024-11-27 07:34:12.374124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:01.697 [2024-11-27 07:34:12.374133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:01.697 [2024-11-27 07:34:12.375765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.697 [2024-11-27 07:34:12.375771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.697 [2024-11-27 07:34:12.453427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:01.697 [2024-11-27 07:34:12.454179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:01.697 [2024-11-27 07:34:12.454362] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:01.958 5000+0 records in 00:39:01.958 5000+0 records out 00:39:01.958 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0191264 s, 535 MB/s 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.958 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:02.219 AIO0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:02.219 [2024-11-27 07:34:13.176783] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.219 07:34:13 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:02.219 [2024-11-27 07:34:13.221196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2681488 0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2681488 0 idle 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681488 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.30 reactor_0' 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681488 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.30 reactor_0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2681488 1 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2681488 1 idle 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:02.219 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681496 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681496 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2681849 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2681488 0 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2681488 0 busy 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:02.480 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681488 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.31 reactor_0' 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681488 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.31 reactor_0 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:02.740 07:34:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:03.682 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:03.682 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:03.682 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:03.682 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681488 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.59 reactor_0' 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681488 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.59 reactor_0 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2681488 1 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2681488 1 busy 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:03.942 07:34:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:03.942 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:03.942 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681496 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.34 reactor_1' 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681496 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.34 reactor_1 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:04.202 07:34:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2681849 00:39:14.195 Initializing NVMe Controllers 00:39:14.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:14.195 Controller IO queue size 256, less than required. 00:39:14.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:14.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:14.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:14.195 Initialization complete. Launching workers. 
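[Editor's note] The workload whose results follow was driven by the spdk_nvme_perf invocation traced at target/interrupt.sh@31 above. A minimal sketch of an equivalent standalone run, using only the flags, address, and NQN that appear in this log (assumes the binary has already been built under build/bin):
  ./build/bin/spdk_nvme_perf \
      -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
The -c 0xC core mask pins the initiator to cores 2 and 3, which is why the result rows below report "from core 2" and "from core 3" while the target's reactors run on cores 0 and 1.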
00:39:14.195 ======================================================== 00:39:14.195 Latency(us) 00:39:14.195 Device Information : IOPS MiB/s Average min max 00:39:14.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20030.50 78.24 12784.94 3998.27 52430.78 00:39:14.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19740.50 77.11 12970.02 7451.20 31394.53 00:39:14.195 ======================================================== 00:39:14.195 Total : 39771.00 155.36 12876.80 3998.27 52430.78 00:39:14.195 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2681488 0 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2681488 0 idle 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:14.195 07:34:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681488 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0' 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681488 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2681488 1 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2681488 1 idle 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:14.195 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681496 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681496 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:14.196 07:34:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2681488 0 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2681488 0 idle 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:16.110 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:16.111 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:16.111 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:16.111 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:16.111 07:34:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681488 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0' 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681488 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2681488 1 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2681488 1 idle 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2681488 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
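[Editor's note] The reactor_is_idle/reactor_is_busy checks traced throughout this test boil down to sampling the reactor thread's %CPU once with top and comparing it against a threshold; a condensed sketch of that probe, assuming the same top(1) output layout (column 9 is %CPU) and reusing the pid, reactor index, and threshold from this run:
  pid=2681488 idx=1 idle_threshold=30
  top_line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
  cpu_rate=$(sed -e 's/^\s*//g' <<< "$top_line" | awk '{print $9}')
  cpu_rate=${cpu_rate%.*}   # truncate: 0.0 -> 0, 99.9 -> 99, as at common.sh@28
  (( cpu_rate > idle_threshold )) && echo "not idle" || echo "idle"
As the (( j = 10 )) / (( j-- )) / sleep 1 lines show, the harness wraps this probe in up to ten retries with a one-second pause when the reactor has not yet settled into the expected state.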
00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2681488 -w 256 00:39:16.111 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2681496 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2681496 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:16.373 07:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:16.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:16.635 rmmod nvme_tcp 00:39:16.635 rmmod nvme_fabrics 00:39:16.635 rmmod nvme_keyring 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2681488 ']' 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2681488 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2681488 ']' 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2681488 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2681488 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2681488' 00:39:16.635 killing process with pid 2681488 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2681488 00:39:16.635 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2681488 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:16.897 07:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.836 07:34:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:19.097 00:39:19.097 real 0m25.676s 00:39:19.097 user 0m40.847s 00:39:19.097 sys 0m9.518s 00:39:19.097 07:34:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.097 07:34:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:19.097 ************************************ 00:39:19.097 END TEST nvmf_interrupt 00:39:19.097 ************************************ 00:39:19.097 00:39:19.097 real 30m16.211s 00:39:19.097 user 61m55.837s 00:39:19.097 sys 10m18.933s 00:39:19.097 07:34:30 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.097 07:34:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:19.097 ************************************ 00:39:19.097 END TEST nvmf_tcp 00:39:19.097 ************************************ 00:39:19.097 07:34:30 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:39:19.097 07:34:30 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:19.097 07:34:30 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:19.097 07:34:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:19.097 07:34:30 -- common/autotest_common.sh@10 -- # set +x 00:39:19.097 ************************************ 00:39:19.097 START TEST spdkcli_nvmf_tcp 00:39:19.097 ************************************ 00:39:19.097 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:19.097 * Looking for test storage... 00:39:19.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:19.097 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:19.097 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:39:19.097 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.359 --rc genhtml_branch_coverage=1 00:39:19.359 --rc genhtml_function_coverage=1 00:39:19.359 --rc genhtml_legend=1 00:39:19.359 --rc geninfo_all_blocks=1 00:39:19.359 --rc geninfo_unexecuted_blocks=1 00:39:19.359 00:39:19.359 ' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.359 --rc genhtml_branch_coverage=1 00:39:19.359 --rc genhtml_function_coverage=1 00:39:19.359 --rc genhtml_legend=1 00:39:19.359 --rc geninfo_all_blocks=1 00:39:19.359 --rc geninfo_unexecuted_blocks=1 00:39:19.359 00:39:19.359 ' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.359 --rc genhtml_branch_coverage=1 00:39:19.359 --rc genhtml_function_coverage=1 00:39:19.359 --rc genhtml_legend=1 00:39:19.359 --rc geninfo_all_blocks=1 00:39:19.359 --rc geninfo_unexecuted_blocks=1 00:39:19.359 00:39:19.359 ' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.359 --rc genhtml_branch_coverage=1 00:39:19.359 --rc genhtml_function_coverage=1 00:39:19.359 --rc genhtml_legend=1 00:39:19.359 --rc geninfo_all_blocks=1 00:39:19.359 --rc geninfo_unexecuted_blocks=1 00:39:19.359 00:39:19.359 ' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:19.359 
07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:19.359 07:34:30 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:19.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2685056 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2685056 00:39:19.359 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2685056 ']' 00:39:19.360 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:19.360 07:34:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:19.360 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:19.360 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:19.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:19.360 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:19.360 07:34:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:19.360 [2024-11-27 07:34:30.464941] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
00:39:19.360 [2024-11-27 07:34:30.465034] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685056 ] 00:39:19.360 [2024-11-27 07:34:30.560119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:19.621 [2024-11-27 07:34:30.614607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:19.621 [2024-11-27 07:34:30.614611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:20.192 07:34:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:20.192 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:20.192 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:20.192 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:20.192 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:20.192 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:20.192 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:20.192 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:20.192 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:20.192 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:20.192 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:20.192 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:20.192 ' 00:39:23.495 [2024-11-27 07:34:34.020326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.504 [2024-11-27 07:34:35.384498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:27.047 [2024-11-27 07:34:37.911550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:28.957 [2024-11-27 07:34:40.137991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:30.869 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:30.869 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:30.869 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:30.869 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:30.869 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:30.869 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:30.869 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:30.869 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:30.869 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:30.869 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:30.869 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:30.869 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:30.869 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:30.869 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:30.870 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:30.870 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:30.870 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:30.870 07:34:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:31.440 
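The create-and-verify pattern traced above has two halves: spdkcli_job.py drives spdkcli with newline-separated "(command, expected-output, verify)" triples, and check_match then dumps the live /nvmf tree and diffs it against a stored .match pattern before deleting the capture. A minimal sketch of the same flow, cut down to four commands, with all paths taken from the trace (the redirect target in the verify step is inferred from the rm that follows it in the trace):

#!/usr/bin/env bash
# Sketch only: builds a tiny NVMe-oF TCP config via spdkcli, then runs the
# same verify step as check_match (spdkcli/common.sh lines 44-46 above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# One quoted argument; each line is "<spdkcli command>" "<expected substring>" <verify>
$SPDK/test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
'/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4' '127.0.0.1:4260' True"

# Dump the live tree, compare it against the expected pattern, drop the capture.
$SPDK/scripts/spdkcli.py ll /nvmf > $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test
$SPDK/test/app/match/match $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test.match
rm -f $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test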
07:34:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:31.440 07:34:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:31.440 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:31.440 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:31.440 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:31.440 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:31.440 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:31.440 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:31.440 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:31.440 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:31.440 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:31.440 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:31.440 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:31.440 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:31.440 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:31.440 ' 00:39:38.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:38.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:38.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:38.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:38.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:38.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:38.022 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:38.022 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:38.022 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:38.022 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:38.022 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:38.022 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:38.022 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:38.022 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:38.022 
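Note the dependency order in the clear-config job above: each subsystem's namespaces, hosts, and listen addresses are deleted first, then the subsystems themselves, and only then the malloc bdevs they referenced. The trailing verify flag is omitted on these lines, which is why every "Executing command" entry in this pass reports False. A condensed sketch of the same teardown, using only commands that appear in the trace (the full job also removes Malloc2 through Malloc6 the same way):

#!/usr/bin/env bash
# Sketch of the teardown ordering used by the clear-config job above;
# no third field per line, so spdkcli_job.py skips output verification.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/spdkcli/spdkcli_job.py "'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1' 'Malloc3'
'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2' 'nqn.2014-08.org.spdk:cnode2'
'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all' '127.0.0.1:4261'
'/nvmf/subsystem delete_all' 'nqn.2014-08.org.spdk:cnode2'
'/bdevs/malloc delete Malloc1' 'Malloc1'"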
07:34:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2685056 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2685056 ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2685056 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2685056 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2685056' 00:39:38.022 killing process with pid 2685056 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2685056 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2685056 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2685056 ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2685056 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2685056 ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2685056 00:39:38.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2685056) - No such process 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2685056 is not found' 00:39:38.022 Process with pid 2685056 is not found 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:38.022 00:39:38.022 real 0m18.155s 00:39:38.022 user 0m40.310s 00:39:38.022 sys 0m0.900s 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.022 07:34:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:38.022 ************************************ 00:39:38.022 END TEST spdkcli_nvmf_tcp 00:39:38.022 ************************************ 00:39:38.022 07:34:48 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:38.022 07:34:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:38.022 07:34:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.022 07:34:48 -- common/autotest_common.sh@10 -- # set +x 00:39:38.022 ************************************ 00:39:38.022 START TEST nvmf_identify_passthru 00:39:38.022 ************************************ 00:39:38.022 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:38.022 * Looking for test 
storage... 00:39:38.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.022 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:38.022 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:39:38.022 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:38.022 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.022 07:34:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:39:38.022 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.023 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:38.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.023 --rc genhtml_branch_coverage=1 00:39:38.023 --rc genhtml_function_coverage=1 00:39:38.023 --rc genhtml_legend=1 00:39:38.023 --rc geninfo_all_blocks=1 00:39:38.023 --rc geninfo_unexecuted_blocks=1 00:39:38.023 00:39:38.023 ' 00:39:38.023 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:38.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.023 --rc genhtml_branch_coverage=1 00:39:38.023 --rc genhtml_function_coverage=1 00:39:38.023 --rc genhtml_legend=1 00:39:38.023 --rc geninfo_all_blocks=1 00:39:38.023 --rc geninfo_unexecuted_blocks=1 00:39:38.023 00:39:38.023 ' 00:39:38.023 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:38.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.023 --rc genhtml_branch_coverage=1 00:39:38.023 --rc genhtml_function_coverage=1 00:39:38.023 --rc genhtml_legend=1 00:39:38.023 --rc geninfo_all_blocks=1 00:39:38.023 --rc geninfo_unexecuted_blocks=1 00:39:38.023 00:39:38.023 ' 00:39:38.023 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:38.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.023 --rc genhtml_branch_coverage=1 00:39:38.023 --rc genhtml_function_coverage=1 00:39:38.023 --rc genhtml_legend=1 00:39:38.023 --rc geninfo_all_blocks=1 00:39:38.023 --rc geninfo_unexecuted_blocks=1 00:39:38.023 00:39:38.023 ' 00:39:38.023 07:34:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:38.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.023 07:34:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.023 07:34:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:38.023 07:34:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.023 07:34:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.023 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:38.023 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:38.023 07:34:48 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:39:38.023 07:34:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:39:44.612 07:34:55 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:44.612 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:44.612 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:44.612 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:44.612 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:44.612 07:34:55 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:44.612 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:44.873 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:44.873 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:44.873 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:44.873 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:44.873 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:44.873 07:34:55 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:44.874 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:45.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:45.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:39:45.135 00:39:45.135 --- 10.0.0.2 ping statistics --- 00:39:45.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.135 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:45.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:45.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:39:45.135 00:39:45.135 --- 10.0.0.1 ping statistics --- 00:39:45.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.135 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:45.135 07:34:56 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:45.135 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:45.135 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:39:45.135 07:34:56 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:39:45.135 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:39:45.135 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:39:45.135 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:45.135 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:45.135 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:45.709 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:39:45.709 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:45.709 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:45.709 07:34:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:46.281 07:34:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:39:46.281 07:34:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:46.281 07:34:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:46.281 07:34:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2692470 00:39:46.281 07:34:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:46.281 07:34:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:46.281 07:34:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2692470 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2692470 ']' 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:46.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:46.281 07:34:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:46.281 [2024-11-27 07:34:57.378856] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:39:46.282 [2024-11-27 07:34:57.378924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:46.282 [2024-11-27 07:34:57.477361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:46.542 [2024-11-27 07:34:57.531048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:46.542 [2024-11-27 07:34:57.531098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:46.542 [2024-11-27 07:34:57.531107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:46.542 [2024-11-27 07:34:57.531115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:46.542 [2024-11-27 07:34:57.531121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:46.542 [2024-11-27 07:34:57.533377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:46.542 [2024-11-27 07:34:57.533625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:46.542 [2024-11-27 07:34:57.533786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:46.542 [2024-11-27 07:34:57.533788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:39:47.113 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.113 INFO: Log level set to 20 00:39:47.113 INFO: Requests: 00:39:47.113 { 00:39:47.113 "jsonrpc": "2.0", 00:39:47.113 "method": "nvmf_set_config", 00:39:47.113 "id": 1, 00:39:47.113 "params": { 00:39:47.113 "admin_cmd_passthru": { 00:39:47.113 "identify_ctrlr": true 00:39:47.113 } 00:39:47.113 } 00:39:47.113 } 00:39:47.113 00:39:47.113 INFO: response: 00:39:47.113 { 00:39:47.113 "jsonrpc": "2.0", 00:39:47.113 "id": 1, 00:39:47.113 "result": true 00:39:47.113 } 00:39:47.113 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.113 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.113 INFO: Setting log level to 20 00:39:47.113 INFO: Setting log level to 20 00:39:47.113 INFO: Log level set to 20 00:39:47.113 INFO: Log level set to 20 00:39:47.113 INFO: Requests: 00:39:47.113 { 00:39:47.113 "jsonrpc": "2.0", 00:39:47.113 "method": "framework_start_init", 00:39:47.113 "id": 1 00:39:47.113 } 00:39:47.113 00:39:47.113 INFO: Requests: 00:39:47.113 { 00:39:47.113 "jsonrpc": "2.0", 00:39:47.113 "method": "framework_start_init", 00:39:47.113 "id": 1 00:39:47.113 } 00:39:47.113 00:39:47.113 [2024-11-27 07:34:58.305837] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:47.113 INFO: response: 00:39:47.113 { 00:39:47.113 "jsonrpc": "2.0", 00:39:47.113 "id": 1, 00:39:47.113 "result": true 00:39:47.113 } 00:39:47.113 00:39:47.113 INFO: response: 00:39:47.113 { 00:39:47.113 "jsonrpc": "2.0", 00:39:47.113 "id": 1, 00:39:47.113 "result": true 00:39:47.113 } 00:39:47.113 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.113 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:47.113 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.113 07:34:58 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:47.113 INFO: Setting log level to 40 00:39:47.113 INFO: Setting log level to 40 00:39:47.113 INFO: Setting log level to 40 00:39:47.374 [2024-11-27 07:34:58.319397] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.374 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.374 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:47.374 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:47.374 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.374 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:39:47.374 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.374 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.635 Nvme0n1 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.635 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.635 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.635 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.635 [2024-11-27 07:34:58.729043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.635 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:47.635 [ 00:39:47.635 { 00:39:47.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:47.635 "subtype": "Discovery", 00:39:47.635 "listen_addresses": [], 00:39:47.635 "allow_any_host": true, 00:39:47.635 "hosts": [] 00:39:47.635 }, 00:39:47.635 { 00:39:47.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:47.635 "subtype": "NVMe", 00:39:47.635 "listen_addresses": [ 00:39:47.635 { 00:39:47.635 "trtype": "TCP", 00:39:47.635 "adrfam": "IPv4", 00:39:47.635 "traddr": "10.0.0.2", 00:39:47.635 "trsvcid": "4420" 00:39:47.635 } 00:39:47.635 ], 00:39:47.635 "allow_any_host": true, 00:39:47.635 "hosts": [], 00:39:47.635 "serial_number": 
"SPDK00000000000001", 00:39:47.635 "model_number": "SPDK bdev Controller", 00:39:47.635 "max_namespaces": 1, 00:39:47.635 "min_cntlid": 1, 00:39:47.635 "max_cntlid": 65519, 00:39:47.635 "namespaces": [ 00:39:47.635 { 00:39:47.635 "nsid": 1, 00:39:47.635 "bdev_name": "Nvme0n1", 00:39:47.635 "name": "Nvme0n1", 00:39:47.635 "nguid": "36344730526054870025384500000044", 00:39:47.635 "uuid": "36344730-5260-5487-0025-384500000044" 00:39:47.635 } 00:39:47.635 ] 00:39:47.635 } 00:39:47.635 ] 00:39:47.635 07:34:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.635 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:47.635 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:47.635 07:34:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:47.897 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:39:47.897 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:47.897 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:47.897 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:48.164 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:39:48.164 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:39:48.164 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:39:48.164 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:48.164 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.164 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:48.164 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.164 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:48.164 07:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:48.164 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:48.164 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:39:48.164 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.164 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:39:48.164 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.164 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.164 rmmod nvme_tcp 00:39:48.164 rmmod nvme_fabrics 00:39:48.164 rmmod nvme_keyring 00:39:48.428 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:48.428 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:39:48.428 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:39:48.428 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2692470 ']' 00:39:48.428 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2692470 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2692470 ']' 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2692470 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2692470 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2692470' 00:39:48.428 killing process with pid 2692470 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2692470 00:39:48.428 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2692470 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.690 07:34:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.690 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:48.690 07:34:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.247 07:35:01 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:51.247 00:39:51.247 real 0m13.451s 00:39:51.247 user 0m11.096s 00:39:51.247 sys 0m6.819s 00:39:51.247 07:35:01 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:51.247 07:35:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:51.247 ************************************ 00:39:51.247 END TEST nvmf_identify_passthru 00:39:51.247 ************************************ 00:39:51.247 07:35:01 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:51.247 07:35:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:51.247 07:35:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:51.247 07:35:01 -- common/autotest_common.sh@10 -- # set +x 00:39:51.247 ************************************ 00:39:51.247 START TEST nvmf_dif 00:39:51.247 ************************************ 00:39:51.247 07:35:01 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:51.247 * Looking for test storage... 
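The nvmf_identify_passthru run that closes above boils down to a short RPC sequence, all of it visible in the trace: enable the passthru identify handler before framework init, create the TCP transport, attach the local PCIe controller, export it as a single-namespace subsystem, then identify it over fabrics and compare serial and model numbers against the PCIe-side values. A sketch using rpc.py, standing in for the rpc_cmd helper seen in the trace (socket path and addressing assumed to match the defaults above):

#!/usr/bin/env bash
# Sketch of the passthru-identify flow from the test above; every subcommand
# and flag is taken from the trace, with rpc.py in place of rpc_cmd.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

$RPC nvmf_set_config --passthru-identify-ctrlr   # must precede framework init
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Identify over fabrics; with passthru enabled this reports the physical
# drive's serial/model (S64GNE0R605487 / SAMSUNG above), not SPDK's virtual IDs.
$SPDK/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'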
00:39:51.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.247 --rc genhtml_branch_coverage=1 00:39:51.247 --rc genhtml_function_coverage=1 00:39:51.247 --rc genhtml_legend=1 00:39:51.247 --rc geninfo_all_blocks=1 00:39:51.247 --rc geninfo_unexecuted_blocks=1 00:39:51.247 00:39:51.247 ' 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.247 --rc genhtml_branch_coverage=1 00:39:51.247 --rc genhtml_function_coverage=1 00:39:51.247 --rc genhtml_legend=1 00:39:51.247 --rc geninfo_all_blocks=1 00:39:51.247 --rc geninfo_unexecuted_blocks=1 00:39:51.247 00:39:51.247 ' 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:39:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.247 --rc genhtml_branch_coverage=1 00:39:51.247 --rc genhtml_function_coverage=1 00:39:51.247 --rc genhtml_legend=1 00:39:51.247 --rc geninfo_all_blocks=1 00:39:51.247 --rc geninfo_unexecuted_blocks=1 00:39:51.247 00:39:51.247 ' 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.247 --rc genhtml_branch_coverage=1 00:39:51.247 --rc genhtml_function_coverage=1 00:39:51.247 --rc genhtml_legend=1 00:39:51.247 --rc geninfo_all_blocks=1 00:39:51.247 --rc geninfo_unexecuted_blocks=1 00:39:51.247 00:39:51.247 ' 00:39:51.247 07:35:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:51.247 07:35:02 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:51.247 07:35:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.247 07:35:02 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.247 07:35:02 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.247 07:35:02 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:51.247 07:35:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:51.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:51.247 07:35:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:51.247 07:35:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:51.247 07:35:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:51.247 07:35:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:51.247 07:35:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:51.247 07:35:02 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:39:51.247 07:35:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:59.384 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.384 
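The discovery loop running here resolves each supported NIC PCI function to its kernel net device through sysfs; a minimal sketch of the core lookup, assuming $pci holds a bus address such as 0000:4b:00.0 as found above:

  # Read the net device name(s) registered under the PCI function.
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep names
  net_devs+=("${pci_net_devs[@]}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"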
07:35:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:59.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.384 07:35:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:59.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:59.385 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:59.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:59.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:39:59.385 00:39:59.385 --- 10.0.0.2 ping statistics --- 00:39:59.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.385 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:59.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:59.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:39:59.385 00:39:59.385 --- 10.0.0.1 ping statistics --- 00:39:59.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.385 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:59.385 07:35:09 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:01.931 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:01.931 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:01.931 07:35:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:01.931 07:35:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2698522 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2698522 00:40:01.931 07:35:13 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2698522 ']' 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:01.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:01.931 07:35:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:02.192 [2024-11-27 07:35:13.164686] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:40:02.192 [2024-11-27 07:35:13.164748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:02.192 [2024-11-27 07:35:13.263692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.192 [2024-11-27 07:35:13.316045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:02.192 [2024-11-27 07:35:13.316099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:02.192 [2024-11-27 07:35:13.316109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:02.192 [2024-11-27 07:35:13.316116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:02.192 [2024-11-27 07:35:13.316122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:02.192 [2024-11-27 07:35:13.316873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.763 07:35:13 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:02.763 07:35:13 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:40:02.763 07:35:13 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:02.763 07:35:13 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:02.763 07:35:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:03.024 07:35:13 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:03.024 07:35:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:03.024 07:35:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:03.024 07:35:13 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.024 07:35:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:03.024 [2024-11-27 07:35:14.000323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:03.024 07:35:14 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.024 07:35:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:03.024 07:35:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:03.024 07:35:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.024 07:35:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:03.024 ************************************ 00:40:03.024 START TEST fio_dif_1_default 00:40:03.024 ************************************ 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:03.024 bdev_null0 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:03.024 [2024-11-27 07:35:14.088692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:03.024 { 00:40:03.024 "params": { 00:40:03.024 "name": "Nvme$subsystem", 00:40:03.024 "trtype": "$TEST_TRANSPORT", 00:40:03.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:03.024 "adrfam": "ipv4", 00:40:03.024 "trsvcid": "$NVMF_PORT", 00:40:03.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:03.024 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:40:03.024 "hdgst": ${hdgst:-false}, 00:40:03.024 "ddgst": ${ddgst:-false} 00:40:03.024 }, 00:40:03.024 "method": "bdev_nvme_attach_controller" 00:40:03.024 } 00:40:03.024 EOF 00:40:03.024 )") 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:03.024 "params": { 00:40:03.024 "name": "Nvme0", 00:40:03.024 "trtype": "tcp", 00:40:03.024 "traddr": "10.0.0.2", 00:40:03.024 "adrfam": "ipv4", 00:40:03.024 "trsvcid": "4420", 00:40:03.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:03.024 "hdgst": false, 00:40:03.024 "ddgst": false 00:40:03.024 }, 00:40:03.024 "method": "bdev_nvme_attach_controller" 00:40:03.024 }' 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:03.024 07:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.593 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:03.593 fio-3.35 00:40:03.593 Starting 1 thread 00:40:15.822 00:40:15.822 filename0: (groupid=0, jobs=1): err= 0: pid=2699104: Wed Nov 27 07:35:25 2024 00:40:15.822 read: IOPS=255, BW=1024KiB/s (1049kB/s)(10.0MiB/10016msec) 00:40:15.822 slat (nsec): min=5482, max=40341, avg=7325.61, stdev=2893.34 00:40:15.822 clat (usec): min=473, max=42713, avg=15605.25, stdev=19391.97 00:40:15.822 lat (usec): min=478, max=42722, avg=15612.58, stdev=19391.28 00:40:15.822 clat percentiles (usec): 00:40:15.822 | 1.00th=[ 594], 5.00th=[ 783], 10.00th=[ 816], 20.00th=[ 840], 00:40:15.822 | 30.00th=[ 873], 40.00th=[ 963], 50.00th=[ 1004], 60.00th=[ 1057], 00:40:15.822 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:15.822 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:15.822 | 99.99th=[42730] 00:40:15.822 bw ( KiB/s): min= 704, max= 4224, per=100.00%, avg=1024.00, stdev=847.02, samples=20 00:40:15.822 iops : min= 176, max= 1056, avg=256.00, stdev=211.76, samples=20 00:40:15.822 lat (usec) : 500=0.16%, 750=3.20%, 1000=45.79% 00:40:15.822 lat (msec) : 2=14.20%, 4=0.16%, 50=36.51% 00:40:15.822 cpu : usr=93.32%, sys=6.44%, ctx=16, majf=0, minf=242 00:40:15.822 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:15.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.822 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:15.822 latency : target=0, window=0, 
percentile=100.00%, depth=4 00:40:15.822 00:40:15.822 Run status group 0 (all jobs): 00:40:15.822 READ: bw=1024KiB/s (1049kB/s), 1024KiB/s-1024KiB/s (1049kB/s-1049kB/s), io=10.0MiB (10.5MB), run=10016-10016msec 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:15.822 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 00:40:15.823 real 0m11.329s 00:40:15.823 user 0m22.220s 00:40:15.823 sys 0m1.043s 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 ************************************ 00:40:15.823 END TEST fio_dif_1_default 00:40:15.823 ************************************ 00:40:15.823 07:35:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:15.823 07:35:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:15.823 07:35:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 ************************************ 00:40:15.823 START TEST fio_dif_1_multi_subsystems 00:40:15.823 ************************************ 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 bdev_null0 
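The multi-subsystem setup starting here replays the same RPC sequence once per subsystem id; condensed for id 0, with rpc.py standing for scripts/rpc.py against the nvmf_tgt started earlier (all arguments as in the log):

  rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420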
00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 [2024-11-27 07:35:25.500133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 bdev_null1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:15.823 { 00:40:15.823 "params": { 00:40:15.823 "name": "Nvme$subsystem", 00:40:15.823 "trtype": "$TEST_TRANSPORT", 00:40:15.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:15.823 "adrfam": "ipv4", 00:40:15.823 "trsvcid": "$NVMF_PORT", 00:40:15.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:15.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:15.823 "hdgst": ${hdgst:-false}, 00:40:15.823 "ddgst": ${ddgst:-false} 00:40:15.823 }, 00:40:15.823 "method": "bdev_nvme_attach_controller" 00:40:15.823 } 00:40:15.823 EOF 00:40:15.823 )") 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:15.823 { 00:40:15.823 "params": { 00:40:15.823 "name": "Nvme$subsystem", 00:40:15.823 "trtype": "$TEST_TRANSPORT", 00:40:15.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:15.823 "adrfam": "ipv4", 00:40:15.823 "trsvcid": "$NVMF_PORT", 00:40:15.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:15.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:15.823 "hdgst": ${hdgst:-false}, 00:40:15.823 "ddgst": ${ddgst:-false} 00:40:15.823 }, 00:40:15.823 "method": "bdev_nvme_attach_controller" 00:40:15.823 } 00:40:15.823 EOF 00:40:15.823 )") 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:15.823 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:15.823 "params": { 00:40:15.823 "name": "Nvme0", 00:40:15.823 "trtype": "tcp", 00:40:15.823 "traddr": "10.0.0.2", 00:40:15.823 "adrfam": "ipv4", 00:40:15.823 "trsvcid": "4420", 00:40:15.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:15.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:15.823 "hdgst": false, 00:40:15.823 "ddgst": false 00:40:15.823 }, 00:40:15.823 "method": "bdev_nvme_attach_controller" 00:40:15.823 },{ 00:40:15.823 "params": { 00:40:15.823 "name": "Nvme1", 00:40:15.823 "trtype": "tcp", 00:40:15.824 "traddr": "10.0.0.2", 00:40:15.824 "adrfam": "ipv4", 00:40:15.824 "trsvcid": "4420", 00:40:15.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:15.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:15.824 "hdgst": false, 00:40:15.824 "ddgst": false 00:40:15.824 }, 00:40:15.824 "method": "bdev_nvme_attach_controller" 00:40:15.824 }' 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 
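The ldd/grep/awk probe just above decides whether a sanitizer runtime must be preloaded ahead of the fio plugin; as a sketch:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  # If the plugin links libasan (or libclang_rt.asan), preload that library first.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin"   # an empty asan_lib, as in this run, is harmless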
00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:15.824 07:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:15.824 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:15.824 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:15.824 fio-3.35 00:40:15.824 Starting 2 threads 00:40:25.833 00:40:25.833 filename0: (groupid=0, jobs=1): err= 0: pid=2701390: Wed Nov 27 07:35:36 2024 00:40:25.833 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:40:25.833 slat (nsec): min=5481, max=32815, avg=6323.96, stdev=1507.00 00:40:25.833 clat (usec): min=593, max=42317, avg=21084.85, stdev=20179.27 00:40:25.833 lat (usec): min=599, max=42350, avg=21091.18, stdev=20179.28 00:40:25.833 clat percentiles (usec): 00:40:25.833 | 1.00th=[ 635], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 832], 00:40:25.833 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:40:25.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:25.833 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:25.833 | 99.99th=[42206] 00:40:25.833 bw ( KiB/s): min= 672, max= 768, per=66.19%, avg=759.58, stdev=23.47, samples=19 00:40:25.833 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19 00:40:25.833 lat (usec) : 750=11.18%, 1000=37.24% 00:40:25.833 lat (msec) : 2=1.37%, 50=50.21% 00:40:25.833 cpu : usr=95.47%, sys=4.32%, ctx=10, majf=0, minf=139 00:40:25.833 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.833 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.833 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:25.833 filename1: (groupid=0, jobs=1): err= 0: pid=2701391: Wed Nov 27 07:35:36 2024 00:40:25.833 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10018msec) 00:40:25.833 slat (nsec): min=5482, max=41926, avg=6494.17, stdev=1940.93 00:40:25.833 clat (usec): min=40832, max=42360, avg=41037.29, stdev=232.31 00:40:25.834 lat (usec): min=40838, max=42402, avg=41043.78, stdev=232.89 00:40:25.834 clat percentiles (usec): 00:40:25.834 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:25.834 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:25.834 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:25.834 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:25.834 | 99.99th=[42206] 00:40:25.834 bw ( KiB/s): min= 384, max= 416, per=33.84%, avg=388.80, stdev=11.72, samples=20 00:40:25.834 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:40:25.834 lat (msec) : 50=100.00% 00:40:25.834 cpu : usr=95.44%, sys=4.36%, ctx=13, majf=0, minf=112 00:40:25.834 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:40:25.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.834 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.834 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:25.834 00:40:25.834 Run status group 0 (all jobs): 00:40:25.834 READ: bw=1147KiB/s (1174kB/s), 390KiB/s-758KiB/s (399kB/s-776kB/s), io=11.2MiB (11.8MB), run=10003-10018msec 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 00:40:25.834 real 0m11.382s 00:40:25.834 user 0m34.501s 00:40:25.834 sys 0m1.249s 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 ************************************ 00:40:25.834 END TEST fio_dif_1_multi_subsystems 00:40:25.834 
************************************ 00:40:25.834 07:35:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:25.834 07:35:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:25.834 07:35:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 ************************************ 00:40:25.834 START TEST fio_dif_rand_params 00:40:25.834 ************************************ 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 bdev_null0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.834 [2024-11-27 07:35:36.965256] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:25.834 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:25.835 { 00:40:25.835 "params": { 00:40:25.835 "name": "Nvme$subsystem", 00:40:25.835 "trtype": "$TEST_TRANSPORT", 00:40:25.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:25.835 "adrfam": "ipv4", 00:40:25.835 "trsvcid": "$NVMF_PORT", 00:40:25.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:25.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:25.835 "hdgst": ${hdgst:-false}, 00:40:25.835 "ddgst": ${ddgst:-false} 00:40:25.835 }, 00:40:25.835 "method": "bdev_nvme_attach_controller" 00:40:25.835 } 00:40:25.835 EOF 00:40:25.835 )") 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:25.835 07:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:25.835 "params": { 00:40:25.835 "name": "Nvme0", 00:40:25.835 "trtype": "tcp", 00:40:25.835 "traddr": "10.0.0.2", 00:40:25.835 "adrfam": "ipv4", 00:40:25.835 "trsvcid": "4420", 00:40:25.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:25.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:25.835 "hdgst": false, 00:40:25.835 "ddgst": false 00:40:25.835 }, 00:40:25.835 "method": "bdev_nvme_attach_controller" 00:40:25.835 }' 00:40:25.835 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:25.835 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:25.835 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:25.835 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:25.835 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:25.835 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:26.116 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:26.116 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:26.116 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:26.116 07:35:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:26.378 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:26.378 ... 
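The xtrace above compresses the whole per-test setup into a handful of lines: target/dif.sh creates a null bdev with 16-byte metadata and DIF type 3, exposes it as the namespace of an NVMe/TCP subsystem listening on 10.0.0.2:4420, assembles the bdev_nvme_attach_controller config with gen_nvmf_target_json, and hands both the JSON and the generated job file to fio on anonymous fds (/dev/fd/62 and /dev/fd/61). Below is a minimal standalone sketch of the same sequence, assuming a running nvmf_tgt and the stock scripts/rpc.py from an SPDK checkout; the file names and the subsystems/bdev JSON wrapper are illustrative (the log only prints the inner config entry), so verify them against your SPDK version.

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your checkout
RPC="$SPDK/scripts/rpc.py"

# Null bdev with metadata and DIF type 3, as bdev_null_create is invoked in the trace
"$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem, namespace, and TCP listener on 10.0.0.2:4420, matching the trace
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# JSON config for the fio plugin; the inner entry is the one printf '%s\n' emits above
cat > bdev.json <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0",
             "hdgst": false, "ddgst": false}}]}]}
JSON

# Launch fio through the SPDK bdev plugin, as fio_bdev does via /dev/fd/62
# (job.fio is any job file using ioengine=spdk_bdev; a reconstruction of the
# generated one appears at the end of this section)
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio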
00:40:26.378 fio-3.35 00:40:26.378 Starting 3 threads 00:40:33.039 00:40:33.039 filename0: (groupid=0, jobs=1): err= 0: pid=2703601: Wed Nov 27 07:35:43 2024 00:40:33.039 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(199MiB/5047msec) 00:40:33.039 slat (nsec): min=5599, max=41309, avg=8594.66, stdev=1893.02 00:40:33.039 clat (usec): min=5257, max=50716, avg=9476.90, stdev=3397.71 00:40:33.039 lat (usec): min=5273, max=50724, avg=9485.50, stdev=3397.70 00:40:33.039 clat percentiles (usec): 00:40:33.039 | 1.00th=[ 6063], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8356], 00:40:33.039 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:40:33.039 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:40:33.039 | 99.00th=[11600], 99.50th=[46924], 99.90th=[50594], 99.95th=[50594], 00:40:33.040 | 99.99th=[50594] 00:40:33.040 bw ( KiB/s): min=31232, max=45568, per=34.19%, avg=40678.40, stdev=3763.29, samples=10 00:40:33.040 iops : min= 244, max= 356, avg=317.80, stdev=29.40, samples=10 00:40:33.040 lat (msec) : 10=78.32%, 20=20.99%, 50=0.57%, 100=0.13% 00:40:33.040 cpu : usr=94.35%, sys=5.39%, ctx=7, majf=0, minf=100 00:40:33.040 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:33.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.040 issued rwts: total=1591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:33.040 filename0: (groupid=0, jobs=1): err= 0: pid=2703602: Wed Nov 27 07:35:43 2024 00:40:33.040 read: IOPS=303, BW=37.9MiB/s (39.8MB/s)(191MiB/5046msec) 00:40:33.040 slat (nsec): min=5506, max=31009, avg=6208.07, stdev=1282.40 00:40:33.040 clat (usec): min=4718, max=88738, avg=9851.08, stdev=5315.86 00:40:33.040 lat (usec): min=4724, max=88744, avg=9857.29, stdev=5316.11 00:40:33.040 clat percentiles (usec): 00:40:33.040 | 1.00th=[ 5604], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 8291], 00:40:33.040 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:40:33.040 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11338], 00:40:33.040 | 99.00th=[47973], 99.50th=[49546], 99.90th=[88605], 99.95th=[88605], 00:40:33.040 | 99.99th=[88605] 00:40:33.040 bw ( KiB/s): min=29696, max=44032, per=32.90%, avg=39142.40, stdev=3970.43, samples=10 00:40:33.040 iops : min= 232, max= 344, avg=305.80, stdev=31.02, samples=10 00:40:33.040 lat (msec) : 10=68.71%, 20=30.18%, 50=0.72%, 100=0.39% 00:40:33.040 cpu : usr=94.47%, sys=5.29%, ctx=10, majf=0, minf=120 00:40:33.040 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:33.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.040 issued rwts: total=1531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:33.040 filename0: (groupid=0, jobs=1): err= 0: pid=2703604: Wed Nov 27 07:35:43 2024 00:40:33.040 read: IOPS=311, BW=38.9MiB/s (40.8MB/s)(196MiB/5045msec) 00:40:33.040 slat (nsec): min=5500, max=32682, avg=6225.74, stdev=1284.43 00:40:33.040 clat (usec): min=5343, max=50878, avg=9611.11, stdev=4242.88 00:40:33.040 lat (usec): min=5349, max=50885, avg=9617.34, stdev=4243.22 00:40:33.040 clat percentiles (usec): 00:40:33.040 | 1.00th=[ 6390], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8160], 
00:40:33.040 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:40:33.040 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076], 00:40:33.040 | 99.00th=[46400], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:40:33.040 | 99.99th=[51119] 00:40:33.040 bw ( KiB/s): min=30720, max=45056, per=33.72%, avg=40115.20, stdev=4024.35, samples=10 00:40:33.040 iops : min= 240, max= 352, avg=313.40, stdev=31.44, samples=10 00:40:33.040 lat (msec) : 10=75.14%, 20=23.77%, 50=0.76%, 100=0.32% 00:40:33.040 cpu : usr=93.99%, sys=5.79%, ctx=9, majf=0, minf=70 00:40:33.040 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:33.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.040 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:33.040 00:40:33.040 Run status group 0 (all jobs): 00:40:33.040 READ: bw=116MiB/s (122MB/s), 37.9MiB/s-39.4MiB/s (39.8MB/s-41.3MB/s), io=586MiB (615MB), run=5045-5047msec 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 bdev_null0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 [2024-11-27 07:35:43.299500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 bdev_null1 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 bdev_null2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.040 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.041 07:35:43 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.041 { 00:40:33.041 "params": { 00:40:33.041 "name": "Nvme$subsystem", 00:40:33.041 "trtype": "$TEST_TRANSPORT", 00:40:33.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.041 "adrfam": "ipv4", 00:40:33.041 "trsvcid": "$NVMF_PORT", 00:40:33.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.041 "hdgst": ${hdgst:-false}, 00:40:33.041 "ddgst": ${ddgst:-false} 00:40:33.041 }, 00:40:33.041 "method": "bdev_nvme_attach_controller" 00:40:33.041 } 00:40:33.041 EOF 00:40:33.041 )") 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.041 { 00:40:33.041 "params": { 00:40:33.041 "name": "Nvme$subsystem", 00:40:33.041 "trtype": "$TEST_TRANSPORT", 00:40:33.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.041 "adrfam": "ipv4", 00:40:33.041 "trsvcid": "$NVMF_PORT", 00:40:33.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.041 "hdgst": ${hdgst:-false}, 00:40:33.041 "ddgst": ${ddgst:-false} 00:40:33.041 }, 00:40:33.041 "method": "bdev_nvme_attach_controller" 00:40:33.041 } 00:40:33.041 EOF 00:40:33.041 )") 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:33.041 07:35:43 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.041 { 00:40:33.041 "params": { 00:40:33.041 "name": "Nvme$subsystem", 00:40:33.041 "trtype": "$TEST_TRANSPORT", 00:40:33.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.041 "adrfam": "ipv4", 00:40:33.041 "trsvcid": "$NVMF_PORT", 00:40:33.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.041 "hdgst": ${hdgst:-false}, 00:40:33.041 "ddgst": ${ddgst:-false} 00:40:33.041 }, 00:40:33.041 "method": "bdev_nvme_attach_controller" 00:40:33.041 } 00:40:33.041 EOF 00:40:33.041 )") 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:33.041 "params": { 00:40:33.041 "name": "Nvme0", 00:40:33.041 "trtype": "tcp", 00:40:33.041 "traddr": "10.0.0.2", 00:40:33.041 "adrfam": "ipv4", 00:40:33.041 "trsvcid": "4420", 00:40:33.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:33.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:33.041 "hdgst": false, 00:40:33.041 "ddgst": false 00:40:33.041 }, 00:40:33.041 "method": "bdev_nvme_attach_controller" 00:40:33.041 },{ 00:40:33.041 "params": { 00:40:33.041 "name": "Nvme1", 00:40:33.041 "trtype": "tcp", 00:40:33.041 "traddr": "10.0.0.2", 00:40:33.041 "adrfam": "ipv4", 00:40:33.041 "trsvcid": "4420", 00:40:33.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:33.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:33.041 "hdgst": false, 00:40:33.041 "ddgst": false 00:40:33.041 }, 00:40:33.041 "method": "bdev_nvme_attach_controller" 00:40:33.041 },{ 00:40:33.041 "params": { 00:40:33.041 "name": "Nvme2", 00:40:33.041 "trtype": "tcp", 00:40:33.041 "traddr": "10.0.0.2", 00:40:33.041 "adrfam": "ipv4", 00:40:33.041 "trsvcid": "4420", 00:40:33.041 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:33.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:33.041 "hdgst": false, 00:40:33.041 "ddgst": false 00:40:33.041 }, 00:40:33.041 "method": "bdev_nvme_attach_controller" 00:40:33.041 }' 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:33.041 
07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:33.041 07:35:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.041 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:33.041 ... 00:40:33.041 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:33.041 ... 00:40:33.041 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:33.041 ... 00:40:33.041 fio-3.35 00:40:33.041 Starting 24 threads 00:40:45.277 00:40:45.277 filename0: (groupid=0, jobs=1): err= 0: pid=2705095: Wed Nov 27 07:35:54 2024 00:40:45.277 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.5MiB/10014msec) 00:40:45.277 slat (usec): min=5, max=108, avg=20.67, stdev=17.19 00:40:45.277 clat (usec): min=4827, max=32702, avg=23476.82, stdev=2245.22 00:40:45.277 lat (usec): min=4851, max=32724, avg=23497.49, stdev=2245.23 00:40:45.277 clat percentiles (usec): 00:40:45.277 | 1.00th=[10290], 5.00th=[22152], 10.00th=[22938], 20.00th=[23462], 00:40:45.277 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.277 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:40:45.277 | 99.00th=[28705], 99.50th=[30016], 99.90th=[31589], 99.95th=[32637], 00:40:45.277 | 99.99th=[32637] 00:40:45.277 bw ( KiB/s): min= 2560, max= 3216, per=4.20%, avg=2704.00, stdev=127.47, samples=20 00:40:45.277 iops : min= 640, max= 804, avg=676.00, stdev=31.87, samples=20 00:40:45.277 lat (msec) : 10=0.90%, 20=3.07%, 50=96.03% 00:40:45.277 cpu : usr=99.01%, sys=0.71%, ctx=13, majf=0, minf=51 00:40:45.277 IO depths : 1=5.3%, 2=11.0%, 4=23.5%, 8=53.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:40:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 issued rwts: total=6776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.277 filename0: (groupid=0, jobs=1): err= 0: pid=2705096: Wed Nov 27 07:35:54 2024 00:40:45.277 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10007msec) 00:40:45.277 slat (usec): min=5, max=124, avg=28.28, stdev=20.10 00:40:45.277 clat (usec): min=15618, max=31864, avg=23700.15, stdev=765.52 00:40:45.277 lat (usec): min=15625, max=31907, avg=23728.44, stdev=763.35 00:40:45.277 clat percentiles (usec): 00:40:45.277 | 1.00th=[22676], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:45.277 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.277 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.277 | 99.00th=[25035], 99.50th=[25297], 99.90th=[30540], 99.95th=[31065], 00:40:45.277 | 99.99th=[31851] 00:40:45.277 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2674.16, stdev=39.40, samples=19 00:40:45.277 iops : min= 640, max= 672, avg=668.47, stdev= 9.88, samples=19 00:40:45.277 lat (msec) : 20=0.66%, 50=99.34% 00:40:45.277 cpu : usr=98.91%, sys=0.82%, ctx=13, majf=0, minf=35 00:40:45.277 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, 
>=64=0.0% 00:40:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.277 filename0: (groupid=0, jobs=1): err= 0: pid=2705097: Wed Nov 27 07:35:54 2024 00:40:45.277 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.4MiB/10019msec) 00:40:45.277 slat (usec): min=5, max=103, avg=26.24, stdev=17.56 00:40:45.277 clat (usec): min=3698, max=25430, avg=23461.68, stdev=2160.06 00:40:45.277 lat (usec): min=3722, max=25436, avg=23487.92, stdev=2160.82 00:40:45.277 clat percentiles (usec): 00:40:45.277 | 1.00th=[ 8029], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:45.277 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.277 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.277 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:40:45.277 | 99.99th=[25560] 00:40:45.277 bw ( KiB/s): min= 2560, max= 3200, per=4.19%, avg=2700.80, stdev=123.89, samples=20 00:40:45.277 iops : min= 640, max= 800, avg=675.20, stdev=30.97, samples=20 00:40:45.277 lat (msec) : 4=0.21%, 10=0.98%, 20=0.71%, 50=98.11% 00:40:45.277 cpu : usr=98.72%, sys=1.00%, ctx=26, majf=0, minf=40 00:40:45.277 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.277 filename0: (groupid=0, jobs=1): err= 0: pid=2705098: Wed Nov 27 07:35:54 2024 00:40:45.277 read: IOPS=668, BW=2674KiB/s (2739kB/s)(26.1MiB/10003msec) 00:40:45.277 slat (usec): min=5, max=117, avg=30.26, stdev=19.15 00:40:45.277 clat (usec): min=5941, max=43878, avg=23640.89, stdev=1607.83 00:40:45.277 lat (usec): min=5947, max=43899, avg=23671.15, stdev=1608.07 00:40:45.277 clat percentiles (usec): 00:40:45.277 | 1.00th=[22152], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:45.277 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:45.277 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.277 | 99.00th=[25035], 99.50th=[28967], 99.90th=[43779], 99.95th=[43779], 00:40:45.277 | 99.99th=[43779] 00:40:45.277 bw ( KiB/s): min= 2436, max= 2704, per=4.13%, avg=2660.95, stdev=67.87, samples=19 00:40:45.277 iops : min= 609, max= 676, avg=665.21, stdev=16.96, samples=19 00:40:45.277 lat (msec) : 10=0.24%, 20=0.75%, 50=99.01% 00:40:45.277 cpu : usr=98.40%, sys=1.16%, ctx=52, majf=0, minf=32 00:40:45.277 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.277 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.277 filename0: (groupid=0, jobs=1): err= 0: pid=2705099: Wed Nov 27 07:35:54 2024 00:40:45.277 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10003msec) 00:40:45.277 slat (usec): min=5, max=116, avg=25.21, stdev=20.70 00:40:45.278 clat (usec): min=5895, 
max=43847, avg=23791.06, stdev=2871.65 00:40:45.278 lat (usec): min=5901, max=43867, avg=23816.27, stdev=2871.81 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[14091], 5.00th=[21103], 10.00th=[22938], 20.00th=[23200], 00:40:45.278 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:45.278 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[27395], 00:40:45.278 | 99.00th=[35914], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:40:45.278 | 99.99th=[43779] 00:40:45.278 bw ( KiB/s): min= 2432, max= 2752, per=4.11%, avg=2647.47, stdev=90.69, samples=19 00:40:45.278 iops : min= 608, max= 688, avg=661.84, stdev=22.70, samples=19 00:40:45.278 lat (msec) : 10=0.18%, 20=3.77%, 50=96.05% 00:40:45.278 cpu : usr=98.53%, sys=0.92%, ctx=120, majf=0, minf=33 00:40:45.278 IO depths : 1=4.9%, 2=10.2%, 4=22.3%, 8=54.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:40:45.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 issued rwts: total=6658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.278 filename0: (groupid=0, jobs=1): err= 0: pid=2705100: Wed Nov 27 07:35:54 2024 00:40:45.278 read: IOPS=672, BW=2692KiB/s (2756kB/s)(26.3MiB/10006msec) 00:40:45.278 slat (nsec): min=5748, max=57044, avg=10145.02, stdev=6720.20 00:40:45.278 clat (usec): min=6924, max=31849, avg=23692.74, stdev=1715.51 00:40:45.278 lat (usec): min=6935, max=31879, avg=23702.88, stdev=1715.31 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[13698], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:40:45.278 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.278 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:40:45.278 | 99.00th=[25035], 99.50th=[26084], 99.90th=[29492], 99.95th=[31851], 00:40:45.278 | 99.99th=[31851] 00:40:45.278 bw ( KiB/s): min= 2560, max= 3054, per=4.18%, avg=2693.79, stdev=96.21, samples=19 00:40:45.278 iops : min= 640, max= 763, avg=673.42, stdev=23.95, samples=19 00:40:45.278 lat (msec) : 10=0.86%, 20=0.88%, 50=98.26% 00:40:45.278 cpu : usr=98.71%, sys=0.88%, ctx=66, majf=0, minf=46 00:40:45.278 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:45.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 issued rwts: total=6733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.278 filename0: (groupid=0, jobs=1): err= 0: pid=2705101: Wed Nov 27 07:35:54 2024 00:40:45.278 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10007msec) 00:40:45.278 slat (usec): min=5, max=115, avg=23.01, stdev=20.69 00:40:45.278 clat (usec): min=16293, max=31111, avg=23756.73, stdev=871.43 00:40:45.278 lat (usec): min=16300, max=31118, avg=23779.74, stdev=868.86 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:45.278 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.278 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.278 | 99.00th=[25035], 99.50th=[25297], 99.90th=[31065], 99.95th=[31065], 00:40:45.278 | 99.99th=[31065] 00:40:45.278 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2674.21, 
stdev=58.67, samples=19 00:40:45.278 iops : min= 640, max= 704, avg=668.53, stdev=14.66, samples=19 00:40:45.278 lat (msec) : 20=0.90%, 50=99.10% 00:40:45.278 cpu : usr=98.86%, sys=0.78%, ctx=83, majf=0, minf=51 00:40:45.278 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:45.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.278 filename0: (groupid=0, jobs=1): err= 0: pid=2705102: Wed Nov 27 07:35:54 2024 00:40:45.278 read: IOPS=667, BW=2672KiB/s (2736kB/s)(26.1MiB/10012msec) 00:40:45.278 slat (nsec): min=5690, max=66671, avg=13845.23, stdev=8955.72 00:40:45.278 clat (usec): min=7174, max=31376, avg=23808.98, stdev=1003.40 00:40:45.278 lat (usec): min=7186, max=31393, avg=23822.82, stdev=1003.29 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[21627], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:40:45.278 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.278 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:40:45.278 | 99.00th=[25035], 99.50th=[29230], 99.90th=[31327], 99.95th=[31327], 00:40:45.278 | 99.99th=[31327] 00:40:45.278 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2667.16, stdev=45.69, samples=19 00:40:45.278 iops : min= 640, max= 672, avg=666.74, stdev=11.42, samples=19 00:40:45.278 lat (msec) : 10=0.04%, 20=0.75%, 50=99.21% 00:40:45.278 cpu : usr=99.05%, sys=0.67%, ctx=12, majf=0, minf=30 00:40:45.278 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:45.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.278 filename1: (groupid=0, jobs=1): err= 0: pid=2705103: Wed Nov 27 07:35:54 2024 00:40:45.278 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.4MiB/10004msec) 00:40:45.278 slat (nsec): min=5678, max=73467, avg=9975.46, stdev=6102.30 00:40:45.278 clat (usec): min=3644, max=25444, avg=23565.64, stdev=2224.22 00:40:45.278 lat (usec): min=3657, max=25451, avg=23575.62, stdev=2223.63 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[ 9896], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:40:45.278 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.278 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:40:45.278 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:40:45.278 | 99.99th=[25560] 00:40:45.278 bw ( KiB/s): min= 2560, max= 3200, per=4.21%, avg=2708.21, stdev=122.65, samples=19 00:40:45.278 iops : min= 640, max= 800, avg=677.05, stdev=30.66, samples=19 00:40:45.278 lat (msec) : 4=0.21%, 10=0.84%, 20=1.34%, 50=97.61% 00:40:45.278 cpu : usr=98.89%, sys=0.84%, ctx=12, majf=0, minf=39 00:40:45.278 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:45.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.278 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:40:45.278 filename1: (groupid=0, jobs=1): err= 0: pid=2705104: Wed Nov 27 07:35:54 2024 00:40:45.278 read: IOPS=687, BW=2749KiB/s (2815kB/s)(26.9MiB/10018msec) 00:40:45.278 slat (nsec): min=5670, max=54618, avg=9704.60, stdev=5194.56 00:40:45.278 clat (usec): min=4434, max=40702, avg=23187.55, stdev=3176.72 00:40:45.278 lat (usec): min=4451, max=40708, avg=23197.25, stdev=3176.94 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[10028], 5.00th=[15270], 10.00th=[20579], 20.00th=[23462], 00:40:45.278 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.278 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:40:45.278 | 99.00th=[30802], 99.50th=[32900], 99.90th=[40633], 99.95th=[40633], 00:40:45.278 | 99.99th=[40633] 00:40:45.278 bw ( KiB/s): min= 2560, max= 3168, per=4.27%, avg=2750.40, stdev=159.46, samples=20 00:40:45.278 iops : min= 640, max= 792, avg=687.60, stdev=39.87, samples=20 00:40:45.278 lat (msec) : 10=1.06%, 20=7.91%, 50=91.03% 00:40:45.278 cpu : usr=98.28%, sys=1.27%, ctx=79, majf=0, minf=54 00:40:45.278 IO depths : 1=4.4%, 2=9.6%, 4=21.6%, 8=56.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:40:45.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.278 filename1: (groupid=0, jobs=1): err= 0: pid=2705105: Wed Nov 27 07:35:54 2024 00:40:45.278 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.1MiB/10015msec) 00:40:45.278 slat (usec): min=5, max=103, avg=16.11, stdev=13.27 00:40:45.278 clat (usec): min=9751, max=33938, avg=23887.12, stdev=1274.98 00:40:45.278 lat (usec): min=9786, max=33948, avg=23903.24, stdev=1274.32 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[19006], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:40:45.278 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.278 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:40:45.278 | 99.00th=[29492], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:40:45.278 | 99.99th=[33817] 00:40:45.278 bw ( KiB/s): min= 2554, max= 2688, per=4.13%, avg=2660.42, stdev=54.11, samples=19 00:40:45.278 iops : min= 638, max= 672, avg=665.05, stdev=13.57, samples=19 00:40:45.278 lat (msec) : 10=0.03%, 20=1.08%, 50=98.89% 00:40:45.278 cpu : usr=98.44%, sys=1.06%, ctx=223, majf=0, minf=31 00:40:45.278 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:45.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.278 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.278 filename1: (groupid=0, jobs=1): err= 0: pid=2705106: Wed Nov 27 07:35:54 2024 00:40:45.278 read: IOPS=689, BW=2759KiB/s (2826kB/s)(27.0MiB/10008msec) 00:40:45.278 slat (nsec): min=5680, max=71207, avg=12908.08, stdev=8657.90 00:40:45.278 clat (usec): min=7389, max=42479, avg=23103.04, stdev=3239.92 00:40:45.278 lat (usec): min=7405, max=42488, avg=23115.95, stdev=3241.29 00:40:45.278 clat percentiles (usec): 00:40:45.278 | 1.00th=[12387], 5.00th=[15795], 10.00th=[17957], 20.00th=[23462], 00:40:45.278 | 30.00th=[23462], 40.00th=[23725], 
50.00th=[23725], 60.00th=[23987], 00:40:45.278 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:40:45.278 | 99.00th=[31065], 99.50th=[36963], 99.90th=[41681], 99.95th=[42206], 00:40:45.278 | 99.99th=[42730] 00:40:45.278 bw ( KiB/s): min= 2560, max= 3424, per=4.30%, avg=2765.47, stdev=194.16, samples=19 00:40:45.278 iops : min= 640, max= 856, avg=691.37, stdev=48.54, samples=19 00:40:45.278 lat (msec) : 10=0.41%, 20=11.86%, 50=87.73% 00:40:45.278 cpu : usr=98.41%, sys=1.08%, ctx=121, majf=0, minf=40 00:40:45.278 IO depths : 1=1.7%, 2=6.4%, 4=21.2%, 8=59.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename1: (groupid=0, jobs=1): err= 0: pid=2705107: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10005msec) 00:40:45.279 slat (usec): min=5, max=108, avg=24.07, stdev=19.73 00:40:45.279 clat (usec): min=5841, max=51791, avg=23833.96, stdev=3055.46 00:40:45.279 lat (usec): min=5866, max=51813, avg=23858.03, stdev=3055.80 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[14615], 5.00th=[19268], 10.00th=[22938], 20.00th=[23200], 00:40:45.279 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.279 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[28705], 00:40:45.279 | 99.00th=[35914], 99.50th=[39060], 99.90th=[41681], 99.95th=[51643], 00:40:45.279 | 99.99th=[51643] 00:40:45.279 bw ( KiB/s): min= 2404, max= 2792, per=4.12%, avg=2652.53, stdev=82.39, samples=19 00:40:45.279 iops : min= 601, max= 698, avg=663.11, stdev=20.59, samples=19 00:40:45.279 lat (msec) : 10=0.39%, 20=5.39%, 50=94.16%, 100=0.06% 00:40:45.279 cpu : usr=99.01%, sys=0.71%, ctx=12, majf=0, minf=31 00:40:45.279 IO depths : 1=3.8%, 2=8.6%, 4=21.1%, 8=57.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=93.2%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename1: (groupid=0, jobs=1): err= 0: pid=2705108: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=670, BW=2681KiB/s (2746kB/s)(26.2MiB/10013msec) 00:40:45.279 slat (usec): min=5, max=116, avg=24.95, stdev=22.01 00:40:45.279 clat (usec): min=6735, max=42473, avg=23639.11, stdev=2214.37 00:40:45.279 lat (usec): min=6754, max=42480, avg=23664.05, stdev=2214.83 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[14353], 5.00th=[21627], 10.00th=[22938], 20.00th=[23200], 00:40:45.279 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.279 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:40:45.279 | 99.00th=[32900], 99.50th=[35390], 99.90th=[41157], 99.95th=[41681], 00:40:45.279 | 99.99th=[42730] 00:40:45.279 bw ( KiB/s): min= 2554, max= 2784, per=4.16%, avg=2677.79, stdev=50.41, samples=19 00:40:45.279 iops : min= 638, max= 696, avg=669.37, stdev=12.65, samples=19 00:40:45.279 lat (msec) : 10=0.06%, 20=3.40%, 50=96.54% 00:40:45.279 cpu : usr=98.87%, sys=0.85%, ctx=10, majf=0, minf=24 00:40:45.279 IO depths : 1=4.8%, 2=10.0%, 4=22.1%, 8=55.1%, 16=8.0%, 
32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename1: (groupid=0, jobs=1): err= 0: pid=2705109: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10014msec) 00:40:45.279 slat (usec): min=5, max=104, avg=20.41, stdev=17.11 00:40:45.279 clat (usec): min=9445, max=46211, avg=23807.52, stdev=3007.06 00:40:45.279 lat (usec): min=9452, max=46230, avg=23827.93, stdev=3008.15 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[14484], 5.00th=[19006], 10.00th=[22676], 20.00th=[23462], 00:40:45.279 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.279 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[28181], 00:40:45.279 | 99.00th=[38011], 99.50th=[39060], 99.90th=[45876], 99.95th=[45876], 00:40:45.279 | 99.99th=[46400] 00:40:45.279 bw ( KiB/s): min= 2554, max= 2736, per=4.15%, avg=2672.47, stdev=44.30, samples=19 00:40:45.279 iops : min= 638, max= 684, avg=668.05, stdev=11.14, samples=19 00:40:45.279 lat (msec) : 10=0.15%, 20=5.63%, 50=94.23% 00:40:45.279 cpu : usr=98.61%, sys=0.96%, ctx=151, majf=0, minf=72 00:40:45.279 IO depths : 1=2.3%, 2=6.6%, 4=20.4%, 8=60.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename1: (groupid=0, jobs=1): err= 0: pid=2705110: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10006msec) 00:40:45.279 slat (usec): min=5, max=473, avg=19.00, stdev=18.28 00:40:45.279 clat (usec): min=6371, max=46323, avg=24254.08, stdev=3520.11 00:40:45.279 lat (usec): min=6377, max=46346, avg=24273.09, stdev=3520.14 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[15008], 5.00th=[19268], 10.00th=[21627], 20.00th=[23462], 00:40:45.279 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:40:45.279 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27657], 95.00th=[31327], 00:40:45.279 | 99.00th=[36963], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:40:45.279 | 99.99th=[46400] 00:40:45.279 bw ( KiB/s): min= 2464, max= 2720, per=4.08%, avg=2623.68, stdev=74.27, samples=19 00:40:45.279 iops : min= 616, max= 680, avg=655.89, stdev=18.56, samples=19 00:40:45.279 lat (msec) : 10=0.06%, 20=6.22%, 50=93.72% 00:40:45.279 cpu : usr=98.56%, sys=0.99%, ctx=70, majf=0, minf=33 00:40:45.279 IO depths : 1=0.2%, 2=0.3%, 4=3.1%, 8=79.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=89.5%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename2: (groupid=0, jobs=1): err= 0: pid=2705111: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10008msec) 00:40:45.279 slat (usec): min=5, max=109, avg=23.54, stdev=19.16 00:40:45.279 clat (usec): min=12270, 
max=32070, avg=23758.33, stdev=784.70 00:40:45.279 lat (usec): min=12280, max=32092, avg=23781.87, stdev=782.15 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:40:45.279 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.279 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.279 | 99.00th=[25035], 99.50th=[25297], 99.90th=[31065], 99.95th=[31851], 00:40:45.279 | 99.99th=[32113] 00:40:45.279 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2673.58, stdev=41.14, samples=19 00:40:45.279 iops : min= 638, max= 672, avg=668.32, stdev=10.36, samples=19 00:40:45.279 lat (msec) : 20=0.63%, 50=99.37% 00:40:45.279 cpu : usr=98.97%, sys=0.75%, ctx=14, majf=0, minf=34 00:40:45.279 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename2: (groupid=0, jobs=1): err= 0: pid=2705112: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.0MiB/10006msec) 00:40:45.279 slat (usec): min=5, max=111, avg=21.40, stdev=18.82 00:40:45.279 clat (usec): min=6083, max=46871, avg=23850.99, stdev=3554.58 00:40:45.279 lat (usec): min=6088, max=46891, avg=23872.38, stdev=3554.82 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[14484], 5.00th=[18482], 10.00th=[20579], 20.00th=[23200], 00:40:45.279 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.279 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26608], 95.00th=[29754], 00:40:45.279 | 99.00th=[36963], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:40:45.279 | 99.99th=[46924] 00:40:45.279 bw ( KiB/s): min= 2432, max= 2720, per=4.11%, avg=2648.11, stdev=65.65, samples=19 00:40:45.279 iops : min= 608, max= 680, avg=662.00, stdev=16.41, samples=19 00:40:45.279 lat (msec) : 10=0.39%, 20=8.32%, 50=91.29% 00:40:45.279 cpu : usr=98.68%, sys=0.95%, ctx=88, majf=0, minf=40 00:40:45.279 IO depths : 1=2.2%, 2=4.5%, 4=11.8%, 8=69.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=91.0%, 8=5.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename2: (groupid=0, jobs=1): err= 0: pid=2705113: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=676, BW=2705KiB/s (2770kB/s)(26.4MiB/10009msec) 00:40:45.279 slat (usec): min=5, max=109, avg=24.08, stdev=17.18 00:40:45.279 clat (usec): min=10554, max=39301, avg=23461.66, stdev=1934.04 00:40:45.279 lat (usec): min=10581, max=39321, avg=23485.74, stdev=1935.89 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[15795], 5.00th=[20317], 10.00th=[23200], 20.00th=[23462], 00:40:45.279 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.279 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.279 | 99.00th=[27657], 99.50th=[31327], 99.90th=[34866], 99.95th=[36439], 00:40:45.279 | 99.99th=[39060] 00:40:45.279 bw ( KiB/s): min= 2560, max= 3120, per=4.18%, avg=2689.89, stdev=114.57, samples=19 
00:40:45.279 iops : min= 640, max= 780, avg=672.42, stdev=28.65, samples=19 00:40:45.279 lat (msec) : 20=4.86%, 50=95.14% 00:40:45.279 cpu : usr=98.97%, sys=0.75%, ctx=13, majf=0, minf=31 00:40:45.279 IO depths : 1=5.5%, 2=11.2%, 4=23.6%, 8=52.6%, 16=7.1%, 32=0.0%, >=64=0.0% 00:40:45.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.279 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.279 filename2: (groupid=0, jobs=1): err= 0: pid=2705114: Wed Nov 27 07:35:54 2024 00:40:45.279 read: IOPS=668, BW=2674KiB/s (2739kB/s)(26.1MiB/10003msec) 00:40:45.279 slat (usec): min=5, max=109, avg=27.01, stdev=18.07 00:40:45.279 clat (usec): min=9061, max=33145, avg=23683.58, stdev=1238.39 00:40:45.279 lat (usec): min=9079, max=33152, avg=23710.59, stdev=1237.75 00:40:45.279 clat percentiles (usec): 00:40:45.279 | 1.00th=[20579], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:40:45.279 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.279 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.279 | 99.00th=[25035], 99.50th=[27919], 99.90th=[33162], 99.95th=[33162], 00:40:45.279 | 99.99th=[33162] 00:40:45.280 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2674.53, stdev=72.59, samples=19 00:40:45.280 iops : min= 640, max= 704, avg=668.63, stdev=18.15, samples=19 00:40:45.280 lat (msec) : 10=0.21%, 20=0.78%, 50=99.01% 00:40:45.280 cpu : usr=98.93%, sys=0.76%, ctx=71, majf=0, minf=30 00:40:45.280 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:45.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.280 filename2: (groupid=0, jobs=1): err= 0: pid=2705115: Wed Nov 27 07:35:54 2024 00:40:45.280 read: IOPS=670, BW=2681KiB/s (2746kB/s)(26.2MiB/10004msec) 00:40:45.280 slat (usec): min=5, max=113, avg=23.14, stdev=20.13 00:40:45.280 clat (usec): min=4193, max=44028, avg=23674.48, stdev=4065.34 00:40:45.280 lat (usec): min=4198, max=44049, avg=23697.62, stdev=4065.48 00:40:45.280 clat percentiles (usec): 00:40:45.280 | 1.00th=[ 9241], 5.00th=[16712], 10.00th=[21103], 20.00th=[23200], 00:40:45.280 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:40:45.280 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[29754], 00:40:45.280 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:40:45.280 | 99.99th=[43779] 00:40:45.280 bw ( KiB/s): min= 2404, max= 2858, per=4.14%, avg=2666.00, stdev=111.94, samples=19 00:40:45.280 iops : min= 601, max= 714, avg=666.47, stdev=27.94, samples=19 00:40:45.280 lat (msec) : 10=1.10%, 20=6.83%, 50=92.07% 00:40:45.280 cpu : usr=98.52%, sys=1.12%, ctx=127, majf=0, minf=32 00:40:45.280 IO depths : 1=1.8%, 2=5.7%, 4=17.0%, 8=63.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:40:45.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 complete : 0=0.0%, 4=92.2%, 8=3.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 issued rwts: total=6706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.280 
filename2: (groupid=0, jobs=1): err= 0: pid=2705116: Wed Nov 27 07:35:54 2024 00:40:45.280 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10003msec) 00:40:45.280 slat (usec): min=5, max=127, avg=28.35, stdev=19.79 00:40:45.280 clat (usec): min=4222, max=43805, avg=23566.05, stdev=1975.15 00:40:45.280 lat (usec): min=4227, max=43825, avg=23594.39, stdev=1976.37 00:40:45.280 clat percentiles (usec): 00:40:45.280 | 1.00th=[15401], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:40:45.280 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:40:45.280 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:40:45.280 | 99.00th=[28967], 99.50th=[31589], 99.90th=[43779], 99.95th=[43779], 00:40:45.280 | 99.99th=[43779] 00:40:45.280 bw ( KiB/s): min= 2436, max= 2784, per=4.15%, avg=2670.21, stdev=68.89, samples=19 00:40:45.280 iops : min= 609, max= 696, avg=667.53, stdev=17.22, samples=19 00:40:45.280 lat (msec) : 10=0.24%, 20=2.44%, 50=97.32% 00:40:45.280 cpu : usr=99.01%, sys=0.71%, ctx=40, majf=0, minf=33 00:40:45.280 IO depths : 1=4.8%, 2=10.0%, 4=21.5%, 8=55.4%, 16=8.3%, 32=0.0%, >=64=0.0% 00:40:45.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 complete : 0=0.0%, 4=93.4%, 8=1.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 issued rwts: total=6716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.280 filename2: (groupid=0, jobs=1): err= 0: pid=2705117: Wed Nov 27 07:35:54 2024 00:40:45.280 read: IOPS=674, BW=2696KiB/s (2761kB/s)(26.4MiB/10016msec) 00:40:45.280 slat (usec): min=5, max=161, avg=14.33, stdev=13.55 00:40:45.280 clat (usec): min=6636, max=30315, avg=23622.52, stdev=1843.35 00:40:45.280 lat (usec): min=6652, max=30322, avg=23636.85, stdev=1843.02 00:40:45.280 clat percentiles (usec): 00:40:45.280 | 1.00th=[11207], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:40:45.280 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.280 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:40:45.280 | 99.00th=[25035], 99.50th=[25035], 99.90th=[28181], 99.95th=[30278], 00:40:45.280 | 99.99th=[30278] 00:40:45.280 bw ( KiB/s): min= 2560, max= 3072, per=4.19%, avg=2694.40, stdev=97.17, samples=20 00:40:45.280 iops : min= 640, max= 768, avg=673.60, stdev=24.29, samples=20 00:40:45.280 lat (msec) : 10=0.71%, 20=1.60%, 50=97.69% 00:40:45.280 cpu : usr=98.73%, sys=0.94%, ctx=135, majf=0, minf=40 00:40:45.280 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:45.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.280 filename2: (groupid=0, jobs=1): err= 0: pid=2705118: Wed Nov 27 07:35:54 2024 00:40:45.280 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10005msec) 00:40:45.280 slat (usec): min=5, max=115, avg=19.12, stdev=16.75 00:40:45.280 clat (usec): min=4123, max=44371, avg=23841.48, stdev=2498.84 00:40:45.280 lat (usec): min=4130, max=44390, avg=23860.60, stdev=2499.65 00:40:45.280 clat percentiles (usec): 00:40:45.280 | 1.00th=[14091], 5.00th=[20579], 10.00th=[23200], 20.00th=[23462], 00:40:45.280 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:40:45.280 | 70.00th=[23987], 
80.00th=[24249], 90.00th=[24511], 95.00th=[26084], 00:40:45.280 | 99.00th=[33424], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:40:45.280 | 99.99th=[44303] 00:40:45.280 bw ( KiB/s): min= 2560, max= 2736, per=4.13%, avg=2657.37, stdev=45.92, samples=19 00:40:45.280 iops : min= 640, max= 684, avg=664.32, stdev=11.49, samples=19 00:40:45.280 lat (msec) : 10=0.33%, 20=3.48%, 50=96.19% 00:40:45.280 cpu : usr=99.04%, sys=0.68%, ctx=13, majf=0, minf=41 00:40:45.280 IO depths : 1=0.6%, 2=1.5%, 4=4.8%, 8=76.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:40:45.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 complete : 0=0.0%, 4=90.1%, 8=8.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.280 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:45.280 00:40:45.280 Run status group 0 (all jobs): 00:40:45.280 READ: bw=62.9MiB/s (65.9MB/s), 2630KiB/s-2759KiB/s (2693kB/s-2826kB/s), io=630MiB (660MB), run=10003-10019msec 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 
-- # for sub in "$@" 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.280 07:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:45.280 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 bdev_null0 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 [2024-11-27 07:35:55.064310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 bdev_null1 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:45.281 { 00:40:45.281 "params": { 00:40:45.281 "name": "Nvme$subsystem", 00:40:45.281 "trtype": "$TEST_TRANSPORT", 00:40:45.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:45.281 "adrfam": "ipv4", 00:40:45.281 "trsvcid": "$NVMF_PORT", 00:40:45.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.281 "hdgst": ${hdgst:-false}, 00:40:45.281 "ddgst": ${ddgst:-false} 00:40:45.281 }, 00:40:45.281 "method": "bdev_nvme_attach_controller" 00:40:45.281 } 00:40:45.281 EOF 00:40:45.281 )") 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:45.281 { 00:40:45.281 "params": { 00:40:45.281 "name": "Nvme$subsystem", 00:40:45.281 "trtype": "$TEST_TRANSPORT", 00:40:45.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:45.281 "adrfam": "ipv4", 00:40:45.281 "trsvcid": "$NVMF_PORT", 00:40:45.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.281 "hdgst": ${hdgst:-false}, 00:40:45.281 "ddgst": ${ddgst:-false} 00:40:45.281 }, 00:40:45.281 "method": "bdev_nvme_attach_controller" 00:40:45.281 } 00:40:45.281 EOF 00:40:45.281 )") 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:45.281 "params": { 00:40:45.281 "name": "Nvme0", 00:40:45.281 "trtype": "tcp", 00:40:45.281 "traddr": "10.0.0.2", 00:40:45.281 "adrfam": "ipv4", 00:40:45.281 "trsvcid": "4420", 00:40:45.281 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:45.281 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:45.281 "hdgst": false, 00:40:45.281 "ddgst": false 00:40:45.281 }, 00:40:45.281 "method": "bdev_nvme_attach_controller" 00:40:45.281 },{ 00:40:45.281 "params": { 00:40:45.281 "name": "Nvme1", 00:40:45.281 "trtype": "tcp", 00:40:45.281 "traddr": "10.0.0.2", 00:40:45.281 "adrfam": "ipv4", 00:40:45.281 "trsvcid": "4420", 00:40:45.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:45.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:45.281 "hdgst": false, 00:40:45.281 "ddgst": false 00:40:45.281 }, 00:40:45.281 "method": "bdev_nvme_attach_controller" 00:40:45.281 }' 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:45.281 07:35:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.281 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:45.281 ... 00:40:45.281 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:45.281 ... 
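The trace above shows the suite's fio invocation pattern: a bdev JSON config is generated on the fly, handed to fio over /dev/fd/62, and the SPDK bdev engine is injected with LD_PRELOAD. A minimal standalone sketch of the same invocation, for the run that starts just below; the SPDK_DIR path and the job parameters are illustrative assumptions, while the attach parameters mirror the config printed above.

    # Sketch: drive an NVMe-oF TCP target through fio's spdk_bdev engine.
    # SPDK_DIR is an assumption; adjust to your checkout.
    SPDK_DIR=/path/to/spdk
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # The attached controller's first namespace appears as bdev "Nvme0n1".
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json --thread=1 \
        --name=randread --filename=Nvme0n1 --rw=randread --bs=8k \
        --iodepth=8 --runtime=5 --time_based=1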
00:40:45.281 fio-3.35 00:40:45.281 Starting 4 threads 00:40:50.564 00:40:50.564 filename0: (groupid=0, jobs=1): err= 0: pid=2707420: Wed Nov 27 07:36:01 2024 00:40:50.564 read: IOPS=2935, BW=22.9MiB/s (24.0MB/s)(115MiB/5002msec) 00:40:50.564 slat (nsec): min=5478, max=73225, avg=6181.01, stdev=2434.93 00:40:50.564 clat (usec): min=1015, max=5091, avg=2708.97, stdev=323.22 00:40:50.564 lat (usec): min=1039, max=5099, avg=2715.15, stdev=323.00 00:40:50.564 clat percentiles (usec): 00:40:50.564 | 1.00th=[ 1909], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:40:50.564 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:40:50.564 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3195], 00:40:50.564 | 99.00th=[ 3982], 99.50th=[ 4178], 99.90th=[ 4555], 99.95th=[ 4621], 00:40:50.564 | 99.99th=[ 5080] 00:40:50.564 bw ( KiB/s): min=23152, max=23808, per=24.95%, avg=23483.20, stdev=184.76, samples=10 00:40:50.564 iops : min= 2894, max= 2976, avg=2935.40, stdev=23.09, samples=10 00:40:50.564 lat (msec) : 2=1.47%, 4=97.60%, 10=0.93% 00:40:50.564 cpu : usr=96.38%, sys=3.38%, ctx=6, majf=0, minf=24 00:40:50.564 IO depths : 1=0.1%, 2=0.3%, 4=70.7%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.564 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.564 issued rwts: total=14683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.564 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:50.564 filename0: (groupid=0, jobs=1): err= 0: pid=2707421: Wed Nov 27 07:36:01 2024 00:40:50.564 read: IOPS=3094, BW=24.2MiB/s (25.3MB/s)(121MiB/5001msec) 00:40:50.564 slat (nsec): min=5481, max=59264, avg=5882.48, stdev=1392.90 00:40:50.564 clat (usec): min=1024, max=4508, avg=2570.15, stdev=381.12 00:40:50.564 lat (usec): min=1030, max=4514, avg=2576.03, stdev=381.19 00:40:50.564 clat percentiles (usec): 00:40:50.564 | 1.00th=[ 1631], 5.00th=[ 1991], 10.00th=[ 2114], 20.00th=[ 2278], 00:40:50.564 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2671], 00:40:50.564 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2966], 95.00th=[ 3294], 00:40:50.564 | 99.00th=[ 3654], 99.50th=[ 3851], 99.90th=[ 4293], 99.95th=[ 4424], 00:40:50.564 | 99.99th=[ 4490] 00:40:50.564 bw ( KiB/s): min=24528, max=24960, per=26.30%, avg=24756.90, stdev=151.97, samples=10 00:40:50.564 iops : min= 3066, max= 3120, avg=3094.60, stdev=19.00, samples=10 00:40:50.564 lat (msec) : 2=5.15%, 4=94.48%, 10=0.37% 00:40:50.564 cpu : usr=96.92%, sys=2.82%, ctx=8, majf=0, minf=46 00:40:50.564 IO depths : 1=0.1%, 2=0.4%, 4=70.4%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.564 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.564 issued rwts: total=15475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.564 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:50.564 filename1: (groupid=0, jobs=1): err= 0: pid=2707422: Wed Nov 27 07:36:01 2024 00:40:50.564 read: IOPS=2926, BW=22.9MiB/s (24.0MB/s)(114MiB/5001msec) 00:40:50.564 slat (nsec): min=5472, max=79907, avg=6162.16, stdev=2513.20 00:40:50.564 clat (usec): min=1477, max=4754, avg=2717.85, stdev=329.18 00:40:50.564 lat (usec): min=1483, max=4760, avg=2724.02, stdev=329.30 00:40:50.564 clat percentiles (usec): 00:40:50.564 | 1.00th=[ 1991], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540], 00:40:50.564 | 30.00th=[ 2606], 
40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:40:50.564 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3326], 00:40:50.564 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4490], 99.95th=[ 4555], 00:40:50.564 | 99.99th=[ 4752] 00:40:50.564 bw ( KiB/s): min=22912, max=23760, per=24.87%, avg=23411.56, stdev=250.03, samples=9 00:40:50.564 iops : min= 2864, max= 2970, avg=2926.44, stdev=31.25, samples=9 00:40:50.564 lat (msec) : 2=1.09%, 4=97.89%, 10=1.02% 00:40:50.564 cpu : usr=96.50%, sys=3.26%, ctx=4, majf=0, minf=39 00:40:50.564 IO depths : 1=0.1%, 2=0.3%, 4=71.3%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.564 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.564 issued rwts: total=14633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.564 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:50.564 filename1: (groupid=0, jobs=1): err= 0: pid=2707424: Wed Nov 27 07:36:01 2024 00:40:50.564 read: IOPS=2811, BW=22.0MiB/s (23.0MB/s)(110MiB/5002msec) 00:40:50.564 slat (nsec): min=5475, max=88216, avg=6040.66, stdev=2164.92 00:40:50.564 clat (usec): min=1259, max=6154, avg=2828.51, stdev=389.36 00:40:50.564 lat (usec): min=1264, max=6181, avg=2834.55, stdev=389.39 00:40:50.564 clat percentiles (usec): 00:40:50.564 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:40:50.564 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:40:50.564 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3326], 95.00th=[ 3687], 00:40:50.565 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5669], 00:40:50.565 | 99.99th=[ 5735] 00:40:50.565 bw ( KiB/s): min=21984, max=23056, per=23.89%, avg=22492.40, stdev=344.88, samples=10 00:40:50.565 iops : min= 2748, max= 2882, avg=2811.50, stdev=43.15, samples=10 00:40:50.565 lat (msec) : 2=0.53%, 4=96.94%, 10=2.54% 00:40:50.565 cpu : usr=93.70%, sys=4.78%, ctx=214, majf=0, minf=62 00:40:50.565 IO depths : 1=0.1%, 2=0.4%, 4=72.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:50.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.565 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.565 issued rwts: total=14063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.565 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:50.565 00:40:50.565 Run status group 0 (all jobs): 00:40:50.565 READ: bw=91.9MiB/s (96.4MB/s), 22.0MiB/s-24.2MiB/s (23.0MB/s-25.3MB/s), io=460MiB (482MB), run=5001-5002msec 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 00:40:50.565 real 0m24.574s 00:40:50.565 user 5m21.128s 00:40:50.565 sys 0m4.824s 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 ************************************ 00:40:50.565 END TEST fio_dif_rand_params 00:40:50.565 ************************************ 00:40:50.565 07:36:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:50.565 07:36:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:50.565 07:36:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 ************************************ 00:40:50.565 START TEST fio_dif_digest 00:40:50.565 ************************************ 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
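The digest test being configured here (NULL_DIF=3, 128k blocks, iodepth 3, header and data digests on) provisions its namespace with the RPC sequence the harness traces next. Outside the test harness the same provisioning can be done directly with scripts/rpc.py; a hedged sketch, assuming a running nvmf_tgt (the transport-creation line is normally issued once earlier in the suite):

    # Sketch: what create_subsystem 0 amounts to, via raw RPCs.
    RPC="$SPDK_DIR/scripts/rpc.py"    # assumption: path to the SPDK checkout
    $RPC nvmf_create_transport -t tcp
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420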
00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 bdev_null0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:50.565 [2024-11-27 07:36:01.622037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:50.565 { 00:40:50.565 "params": { 00:40:50.565 "name": "Nvme$subsystem", 00:40:50.565 "trtype": "$TEST_TRANSPORT", 00:40:50.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:50.565 "adrfam": "ipv4", 00:40:50.565 "trsvcid": "$NVMF_PORT", 00:40:50.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:50.565 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:40:50.565 "hdgst": ${hdgst:-false}, 00:40:50.565 "ddgst": ${ddgst:-false} 00:40:50.565 }, 00:40:50.565 "method": "bdev_nvme_attach_controller" 00:40:50.565 } 00:40:50.565 EOF 00:40:50.565 )") 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:50.565 "params": { 00:40:50.565 "name": "Nvme0", 00:40:50.565 "trtype": "tcp", 00:40:50.565 "traddr": "10.0.0.2", 00:40:50.565 "adrfam": "ipv4", 00:40:50.565 "trsvcid": "4420", 00:40:50.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:50.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:50.565 "hdgst": true, 00:40:50.565 "ddgst": true 00:40:50.565 }, 00:40:50.565 "method": "bdev_nvme_attach_controller" 00:40:50.565 }' 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:50.565 07:36:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:51.141 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:51.141 ... 
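Unlike the earlier runs, the attach parameters printed above set "hdgst": true and "ddgst": true, enabling NVMe/TCP header and data digests (CRC32C checks on the PDUs) for the 10-second digest run starting below. For comparison, the Linux kernel initiator exposes the same knobs through nvme-cli; a hedged sketch, with the address and NQN copied from the trace:

    # Sketch: kernel-initiator equivalent of the hdgst/ddgst flags above.
    sudo modprobe nvme-tcp
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hdr-digest --data-digest

Digest generation and verification cost CPU on both initiator and target, which is plausibly why the rest of the suite leaves both flags false.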
00:40:51.141 fio-3.35 00:40:51.141 Starting 3 threads 00:41:03.531 00:41:03.531 filename0: (groupid=0, jobs=1): err= 0: pid=2708930: Wed Nov 27 07:36:12 2024 00:41:03.531 read: IOPS=299, BW=37.4MiB/s (39.3MB/s)(376MiB/10047msec) 00:41:03.531 slat (nsec): min=6032, max=85911, avg=9415.70, stdev=2185.40 00:41:03.531 clat (usec): min=5499, max=52907, avg=9990.70, stdev=2200.42 00:41:03.531 lat (usec): min=5510, max=52916, avg=10000.12, stdev=2200.42 00:41:03.531 clat percentiles (usec): 00:41:03.531 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 8848], 00:41:03.531 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:41:03.531 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:41:03.531 | 99.00th=[12518], 99.50th=[12911], 99.90th=[50594], 99.95th=[52691], 00:41:03.531 | 99.99th=[52691] 00:41:03.531 bw ( KiB/s): min=35584, max=42240, per=34.55%, avg=38489.60, stdev=1580.87, samples=20 00:41:03.531 iops : min= 278, max= 330, avg=300.70, stdev=12.35, samples=20 00:41:03.531 lat (msec) : 10=44.43%, 20=55.40%, 50=0.03%, 100=0.13% 00:41:03.531 cpu : usr=93.78%, sys=5.93%, ctx=21, majf=0, minf=144 00:41:03.531 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:03.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.531 issued rwts: total=3009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.531 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:03.531 filename0: (groupid=0, jobs=1): err= 0: pid=2708931: Wed Nov 27 07:36:12 2024 00:41:03.531 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(372MiB/10046msec) 00:41:03.531 slat (nsec): min=5920, max=37799, avg=7713.41, stdev=1777.93 00:41:03.531 clat (usec): min=5219, max=49804, avg=10117.16, stdev=1740.26 00:41:03.531 lat (usec): min=5225, max=49810, avg=10124.88, stdev=1740.34 00:41:03.531 clat percentiles (usec): 00:41:03.531 | 1.00th=[ 6390], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8979], 00:41:03.531 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:41:03.531 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:41:03.531 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14746], 99.95th=[46924], 00:41:03.531 | 99.99th=[49546] 00:41:03.531 bw ( KiB/s): min=34816, max=40448, per=34.13%, avg=38016.00, stdev=1532.63, samples=20 00:41:03.531 iops : min= 272, max= 316, avg=297.00, stdev=11.97, samples=20 00:41:03.531 lat (msec) : 10=40.34%, 20=59.59%, 50=0.07% 00:41:03.531 cpu : usr=94.09%, sys=5.63%, ctx=22, majf=0, minf=222 00:41:03.531 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:03.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.531 issued rwts: total=2972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.531 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:03.531 filename0: (groupid=0, jobs=1): err= 0: pid=2708932: Wed Nov 27 07:36:12 2024 00:41:03.531 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(345MiB/10045msec) 00:41:03.531 slat (nsec): min=5862, max=54046, avg=8177.22, stdev=2196.20 00:41:03.531 clat (usec): min=5803, max=91435, avg=10881.59, stdev=8043.96 00:41:03.531 lat (usec): min=5810, max=91444, avg=10889.77, stdev=8043.96 00:41:03.531 clat percentiles (usec): 00:41:03.531 | 1.00th=[ 7439], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 8717], 
00:41:03.531 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:41:03.531 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[11207], 00:41:03.531 | 99.00th=[51119], 99.50th=[51643], 99.90th=[90702], 99.95th=[90702], 00:41:03.531 | 99.99th=[91751] 00:41:03.531 bw ( KiB/s): min=29440, max=41472, per=31.72%, avg=35340.80, stdev=2980.21, samples=20 00:41:03.531 iops : min= 230, max= 324, avg=276.10, stdev=23.28, samples=20 00:41:03.531 lat (msec) : 10=74.85%, 20=21.64%, 50=1.01%, 100=2.50% 00:41:03.531 cpu : usr=93.88%, sys=5.85%, ctx=27, majf=0, minf=113 00:41:03.531 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:03.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.531 issued rwts: total=2763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.532 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:03.532 00:41:03.532 Run status group 0 (all jobs): 00:41:03.532 READ: bw=109MiB/s (114MB/s), 34.4MiB/s-37.4MiB/s (36.1MB/s-39.3MB/s), io=1093MiB (1146MB), run=10045-10047msec 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.532 00:41:03.532 real 0m11.279s 00:41:03.532 user 0m44.993s 00:41:03.532 sys 0m2.050s 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:03.532 07:36:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:03.532 ************************************ 00:41:03.532 END TEST fio_dif_digest 00:41:03.532 ************************************ 00:41:03.532 07:36:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:03.532 07:36:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:03.532 rmmod nvme_tcp 00:41:03.532 rmmod nvme_fabrics 00:41:03.532 rmmod nvme_keyring 00:41:03.532 07:36:12 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2698522 ']' 00:41:03.532 07:36:12 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2698522 00:41:03.532 07:36:12 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2698522 ']' 00:41:03.532 07:36:12 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2698522 00:41:03.532 07:36:12 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:41:03.532 07:36:12 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:03.532 07:36:12 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698522 00:41:03.532 07:36:13 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:03.532 07:36:13 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:03.532 07:36:13 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698522' 00:41:03.532 killing process with pid 2698522 00:41:03.532 07:36:13 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2698522 00:41:03.532 07:36:13 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2698522 00:41:03.532 07:36:13 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:03.532 07:36:13 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:05.446 Waiting for block devices as requested 00:41:05.446 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:05.446 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:05.446 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:05.706 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:05.706 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:05.706 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:05.966 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:05.966 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:05.966 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:06.226 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:06.226 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:06.486 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:06.486 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:06.486 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:06.486 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:06.746 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:06.746 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:07.007 07:36:18 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:07.007 07:36:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:07.007 07:36:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:09.550 07:36:20 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:09.550 
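The nvmftestfini teardown traced above unloads the initiator modules, kills the target process, strips the suite's iptables rules, and flushes the test address. A condensed sketch of the same steps; $nvmfpid and the cvl_0_1 interface name come from earlier setup and are assumed here.

    # Sketch: teardown sequence, mirroring the trace above.
    sync
    kill "$nvmfpid" && wait "$nvmfpid"             # stop the nvmf target (pid 2698522 in this run)
    sudo modprobe -v -r nvme-tcp                   # the rmmod lines in the log
    sudo modprobe -v -r nvme-fabrics
    sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
    sudo ip -4 addr flush cvl_0_1                  # drop the 10.0.0.x test IP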
00:41:09.550 real 1m18.295s 00:41:09.550 user 8m5.486s 00:41:09.550 sys 0m22.350s 00:41:09.550 07:36:20 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:09.550 07:36:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:09.550 ************************************ 00:41:09.550 END TEST nvmf_dif 00:41:09.550 ************************************ 00:41:09.550 07:36:20 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:09.550 07:36:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:09.550 07:36:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:09.550 07:36:20 -- common/autotest_common.sh@10 -- # set +x 00:41:09.550 ************************************ 00:41:09.550 START TEST nvmf_abort_qd_sizes 00:41:09.550 ************************************ 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:09.550 * Looking for test storage... 00:41:09.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.550 --rc genhtml_branch_coverage=1 00:41:09.550 --rc genhtml_function_coverage=1 00:41:09.550 --rc genhtml_legend=1 00:41:09.550 --rc geninfo_all_blocks=1 00:41:09.550 --rc geninfo_unexecuted_blocks=1 00:41:09.550 00:41:09.550 ' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.550 --rc genhtml_branch_coverage=1 00:41:09.550 --rc genhtml_function_coverage=1 00:41:09.550 --rc genhtml_legend=1 00:41:09.550 --rc geninfo_all_blocks=1 00:41:09.550 --rc geninfo_unexecuted_blocks=1 00:41:09.550 00:41:09.550 ' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.550 --rc genhtml_branch_coverage=1 00:41:09.550 --rc genhtml_function_coverage=1 00:41:09.550 --rc genhtml_legend=1 00:41:09.550 --rc geninfo_all_blocks=1 00:41:09.550 --rc geninfo_unexecuted_blocks=1 00:41:09.550 00:41:09.550 ' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.550 --rc genhtml_branch_coverage=1 00:41:09.550 --rc genhtml_function_coverage=1 00:41:09.550 --rc genhtml_legend=1 00:41:09.550 --rc geninfo_all_blocks=1 00:41:09.550 --rc geninfo_unexecuted_blocks=1 00:41:09.550 00:41:09.550 ' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.550 07:36:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
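Two of the values initialized in this prologue come from nvme-cli: the host NQN is generated once and the host ID is the UUID embedded in it. A minimal sketch of that derivation — the trace only shows the resulting assignments, so the parameter expansion below is an assumption about how common.sh strips the prefix (the UUID itself contains no colons, so cutting at the last ':' works):

NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*:}          # keep only the UUID after the last colon
# Later connects pass both, as in the NVME_HOST array traced above:
# nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" ...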
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:09.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:09.551 07:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- 
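Note the non-fatal error recorded above: common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string as an integer. The usual hardening is to give the variable a numeric default before it reaches the comparison; a hypothetical fix (the variable name below is illustrative, not the one actually used in common.sh):

flag=""                              # stands in for the unset/empty variable at line 33
if [ "${flag:-0}" -eq 1 ]; then      # ':-0' keeps the numeric test well-formed
    echo "flag enabled"
fi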
nvmf/common.sh@320 -- # local -ga e810 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:17.692 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:17.693 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:17.693 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:17.693 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:17.693 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:17.693 07:36:27 
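The loop traced above classifies NICs by PCI vendor:device pairs (0x8086:0x159b is the Intel E810 'ice' part, found twice on this node) and then looks for the kernel netdev bound under each function. A condensed sketch of the sysfs walk, assuming only the standard /sys/bus/pci layout:

for pci in 0000:4b:00.0 0000:4b:00.1; do                  # the two E810 ports found above
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue                        # function has no bound netdev
        echo "Found net devices under $pci: ${path##*/}"  # cvl_0_0 / cvl_0_1 in this run
    done
done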
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:17.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:17.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:41:17.693 00:41:17.693 --- 10.0.0.2 ping statistics --- 00:41:17.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.693 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:17.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:17.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:41:17.693 00:41:17.693 --- 10.0.0.1 ping statistics --- 00:41:17.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.693 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:17.693 07:36:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:20.240 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:20.240 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:20.811 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2718827 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2718827 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2718827 ']' 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
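The nvmf_tcp_init sequence traced above isolates the target side in a network namespace so initiator and target can share one physical host, then verifies reachability in both directions. Condensed from the trace, with the same interface and address names:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
ping -c 1 10.0.0.2                                               # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # and back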
00:41:20.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:20.812 07:36:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:20.812 [2024-11-27 07:36:31.854714] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:41:20.812 [2024-11-27 07:36:31.854774] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:20.812 [2024-11-27 07:36:31.955097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:20.812 [2024-11-27 07:36:32.008912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:20.812 [2024-11-27 07:36:32.008968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:20.812 [2024-11-27 07:36:32.008977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:20.812 [2024-11-27 07:36:32.008984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:20.812 [2024-11-27 07:36:32.008990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:20.812 [2024-11-27 07:36:32.011469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.812 [2024-11-27 07:36:32.011628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:20.812 [2024-11-27 07:36:32.011794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.812 [2024-11-27 07:36:32.011794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:21.754 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:21.755 
07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.755 07:36:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:21.755 ************************************ 00:41:21.755 START TEST spdk_target_abort 00:41:21.755 ************************************ 00:41:21.755 07:36:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:41:21.755 07:36:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:21.755 07:36:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:41:21.755 07:36:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.755 07:36:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:22.016 spdk_targetn1 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:22.016 [2024-11-27 07:36:33.088118] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:22.016 [2024-11-27 07:36:33.144481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:22.016 07:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:22.276 [2024-11-27 07:36:33.420426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
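Each rpc_cmd above is effectively a wrapper (with retries) over scripts/rpc.py aimed at the target's /var/tmp/spdk.sock. Replayed by hand, the spdk_target_abort setup attaches the local NVMe drive as a bdev and exports it over the namespaced TCP listener:

./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The abort example then hammers that listener; its flags mirror perf's conventions (an inference, not spelled out in the trace): -q is the queue depth under test, -w rw with -M 50 is a 50/50 read/write mix, and -o 4096 is the I/O size.

./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'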
cid:189 nsid:1 lba:56 len:8 PRP1 0x200004abe000 PRP2 0x0 00:41:22.277 [2024-11-27 07:36:33.420460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:41:22.277 [2024-11-27 07:36:33.435571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:496 len:8 PRP1 0x200004abe000 PRP2 0x0 00:41:22.277 [2024-11-27 07:36:33.435595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:41:22.277 [2024-11-27 07:36:33.443635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:720 len:8 PRP1 0x200004abe000 PRP2 0x0 00:41:22.277 [2024-11-27 07:36:33.443657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:41:22.537 [2024-11-27 07:36:33.483719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1976 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:41:22.537 [2024-11-27 07:36:33.483742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:41:22.537 [2024-11-27 07:36:33.515729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3000 len:8 PRP1 0x200004abe000 PRP2 0x0 00:41:22.537 [2024-11-27 07:36:33.515752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:22.537 [2024-11-27 07:36:33.515792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2992 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:41:22.537 [2024-11-27 07:36:33.515801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:22.537 [2024-11-27 07:36:33.539708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3752 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:41:22.537 [2024-11-27 07:36:33.539729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d9 p:0 m:0 dnr:0 00:41:25.840 Initializing NVMe Controllers 00:41:25.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:25.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:25.840 Initialization complete. Launching workers. 
00:41:25.840 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11446, failed: 7 00:41:25.840 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2264, failed to submit 9189 00:41:25.840 success 757, unsuccessful 1507, failed 0 00:41:25.840 07:36:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:25.840 07:36:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:25.840 [2024-11-27 07:36:36.595363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e50000 PRP2 0x0 00:41:25.840 [2024-11-27 07:36:36.595408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:41:25.840 [2024-11-27 07:36:36.611358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:640 len:8 PRP1 0x200004e58000 PRP2 0x0 00:41:25.840 [2024-11-27 07:36:36.611383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:41:25.840 [2024-11-27 07:36:36.649915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:1576 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:41:25.840 [2024-11-27 07:36:36.649938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00ce p:1 m:0 dnr:0 00:41:25.840 [2024-11-27 07:36:36.753258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3856 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:41:25.840 [2024-11-27 07:36:36.753284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0 00:41:29.140 Initializing NVMe Controllers 00:41:29.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:29.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:29.140 Initialization complete. Launching workers. 
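The counters in these summaries are internally consistent. For the first (qd=4) run above, every submitted abort either succeeded or raced with a completion, and every I/O was either targeted by a submitted abort or counted under "failed to submit". A quick check, assuming one abort attempt per outstanding I/O:

echo $(( 757 + 1507 ))      # 2264  -> matches "abort submitted 2264"
echo $(( 2264 + 9189 ))     # 11453 -> submitted + failed-to-submit abort attempts
echo $(( 11446 + 7 ))       # 11453 -> equals I/O completed + failed, so nothing is unaccounted for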
00:41:29.140 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8638, failed: 4 00:41:29.140 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1199, failed to submit 7443 00:41:29.140 success 378, unsuccessful 821, failed 0 00:41:29.140 07:36:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:29.140 07:36:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:29.140 [2024-11-27 07:36:39.875572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:154 nsid:1 lba:2864 len:8 PRP1 0x200004afc000 PRP2 0x0 00:41:29.140 [2024-11-27 07:36:39.875601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:154 cdw0:0 sqhd:002f p:1 m:0 dnr:0 00:41:31.682 [2024-11-27 07:36:42.604428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:168 nsid:1 lba:321616 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:41:31.682 [2024-11-27 07:36:42.604474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:168 cdw0:0 sqhd:00c9 p:0 m:0 dnr:0 00:41:31.682 [2024-11-27 07:36:42.842496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:165 nsid:1 lba:349568 len:8 PRP1 0x200004ada000 PRP2 0x0 00:41:31.682 [2024-11-27 07:36:42.842517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:165 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:31.682 Initializing NVMe Controllers 00:41:31.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:31.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:31.682 Initialization complete. Launching workers. 
00:41:31.682 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43938, failed: 3 00:41:31.682 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2638, failed to submit 41303 00:41:31.682 success 574, unsuccessful 2064, failed 0 00:41:31.682 07:36:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:31.682 07:36:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.943 07:36:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:31.943 07:36:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.943 07:36:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:31.943 07:36:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.943 07:36:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2718827 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2718827 ']' 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2718827 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718827 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718827' 00:41:33.854 killing process with pid 2718827 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2718827 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2718827 00:41:33.854 00:41:33.854 real 0m12.146s 00:41:33.854 user 0m49.497s 00:41:33.854 sys 0m1.983s 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:33.854 ************************************ 00:41:33.854 END TEST spdk_target_abort 00:41:33.854 ************************************ 00:41:33.854 07:36:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:33.854 07:36:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:33.854 07:36:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:33.854 07:36:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:33.854 ************************************ 00:41:33.854 START TEST kernel_target_abort 00:41:33.854 
************************************ 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:33.854 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.855 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:33.855 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:33.855 07:36:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:33.855 07:36:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:37.152 Waiting for block devices as requested 00:41:37.411 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:37.411 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:37.411 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:37.411 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:37.671 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:37.671 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:37.671 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:37.931 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:37.931 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:38.197 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:38.197 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:38.197 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:38.458 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:38.458 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:38.458 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:38.718 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:38.718 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:38.977 No valid GPT data, bailing 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:38.977 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:39.237 07:36:50 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:41:39.237 00:41:39.237 Discovery Log Number of Records 2, Generation counter 2 00:41:39.237 =====Discovery Log Entry 0====== 00:41:39.237 trtype: tcp 00:41:39.237 adrfam: ipv4 00:41:39.237 subtype: current discovery subsystem 00:41:39.237 treq: not specified, sq flow control disable supported 00:41:39.237 portid: 1 00:41:39.237 trsvcid: 4420 00:41:39.237 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:39.237 traddr: 10.0.0.1 00:41:39.237 eflags: none 00:41:39.237 sectype: none 00:41:39.237 =====Discovery Log Entry 1====== 00:41:39.237 trtype: tcp 00:41:39.237 adrfam: ipv4 00:41:39.237 subtype: nvme subsystem 00:41:39.237 treq: not specified, sq flow control disable supported 00:41:39.237 portid: 1 00:41:39.237 trsvcid: 4420 00:41:39.237 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:39.237 traddr: 10.0.0.1 00:41:39.237 eflags: none 00:41:39.237 sectype: none 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:39.237 07:36:50 
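configure_kernel_target, traced above, builds the same kind of target as the SPDK run but out of the kernel's nvmet configfs tree: pick an unused block device with no partition table (the spdk-gpt.py "No valid GPT data, bailing" check), publish it as namespace 1, and bind a TCP port. The xtrace hides the redirection targets, so the attribute paths below are the standard nvmet configfs names matched to the echoed values, not read off the log:

modprobe nvmet
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$sub" "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"      # serial, per the first echo above
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/       # port goes live on the link

The nvme discover output that follows confirms the result: a discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420.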
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:39.237 07:36:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:42.535 Initializing NVMe Controllers 00:41:42.535 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:42.535 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:42.535 Initialization complete. Launching workers. 00:41:42.535 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67800, failed: 0 00:41:42.535 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67800, failed to submit 0 00:41:42.535 success 0, unsuccessful 67800, failed 0 00:41:42.535 07:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:42.535 07:36:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:45.860 Initializing NVMe Controllers 00:41:45.860 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:45.860 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:45.860 Initialization complete. Launching workers. 
00:41:45.860 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 115793, failed: 0 00:41:45.860 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29194, failed to submit 86599 00:41:45.860 success 0, unsuccessful 29194, failed 0 00:41:45.860 07:36:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:45.861 07:36:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:49.162 Initializing NVMe Controllers 00:41:49.162 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:49.162 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:49.162 Initialization complete. Launching workers. 00:41:49.162 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145619, failed: 0 00:41:49.162 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36442, failed to submit 109177 00:41:49.162 success 0, unsuccessful 36442, failed 0 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:41:49.162 07:36:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:52.463 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:52.463 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:41:52.463 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:53.848 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:54.110 00:41:54.110 real 0m20.302s 00:41:54.110 user 0m9.936s 00:41:54.110 sys 0m5.998s 00:41:54.110 07:37:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.110 07:37:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:54.110 ************************************ 00:41:54.110 END TEST kernel_target_abort 00:41:54.110 ************************************ 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:54.371 rmmod nvme_tcp 00:41:54.371 rmmod nvme_fabrics 00:41:54.371 rmmod nvme_keyring 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2718827 ']' 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2718827 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2718827 ']' 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2718827 00:41:54.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2718827) - No such process 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2718827 is not found' 00:41:54.371 Process with pid 2718827 is not found 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:54.371 07:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:57.670 Waiting for block devices as requested 00:41:57.670 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:57.930 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:57.930 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:57.930 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:58.191 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:58.191 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:58.191 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:58.191 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:58.451 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:58.713 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:58.713 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:58.713 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:58.974 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:58.974 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:58.974 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:59.235 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:59.235 
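clean_kernel_target and the nvmftestfini lines around it unwind everything in reverse: disable the namespace, unlink the port from the subsystem before removing either, drop the nvmet modules, then strip the initiator-side stack and only the firewall rules the test tagged (the iptr trace just below). Condensed:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$sub/namespaces/1/enable"                   # quiesce the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$sub"
modprobe -r nvmet_tcp nvmet
modprobe -r nvme-tcp                                  # also unloads nvme_fabrics/nvme_keyring, per the rmmod lines
iptables-save | grep -v SPDK_NVMF | iptables-restore  # round-trip the ruleset minus the SPDK_NVMF-tagged rules
ip -4 addr flush cvl_0_1                              # return the initiator port to a clean state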
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:59.496 07:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.042 07:37:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:02.042 00:42:02.042 real 0m52.378s 00:42:02.042 user 1m4.810s 00:42:02.042 sys 0m19.137s 00:42:02.042 07:37:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.042 07:37:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:02.042 ************************************ 00:42:02.042 END TEST nvmf_abort_qd_sizes 00:42:02.042 ************************************ 00:42:02.042 07:37:12 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:02.042 07:37:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:02.042 07:37:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.042 07:37:12 -- common/autotest_common.sh@10 -- # set +x 00:42:02.042 ************************************ 00:42:02.042 START TEST keyring_file 00:42:02.042 ************************************ 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:02.042 * Looking for test storage... 
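[editor's note] Before the keyring_file trace begins: the clean_kernel_target sequence traced above (nvmf/common.sh@712-723) reduces to a plain configfs teardown of the kernel nvmet target. A minimal sketch, assuming the layout the test created; xtrace does not show redirect targets, so the enable-file path for the bare `echo 0` is an assumption:

#!/usr/bin/env bash
# Sketch of clean_kernel_target as traced above. Paths match the test's
# nqn.2016-06.io.spdk:testnqn subsystem; the 'echo 0' redirect is assumed.
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed target of 'echo 0'
rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink subsystem from port 1
rmdir "$cfg/subsystems/$nqn/namespaces/1"            # drop the namespace
rmdir "$cfg/ports/1"                                 # drop the port
rmdir "$cfg/subsystems/$nqn"                         # drop the subsystem
modprobe -r nvmet_tcp nvmet                          # unload the kernel target

Order matters here: configfs refuses to rmdir a subsystem that is still linked under a port, which is why the rm -f of the port link comes first.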
00:42:02.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:02.042 07:37:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.042 --rc genhtml_branch_coverage=1 00:42:02.042 --rc genhtml_function_coverage=1 00:42:02.042 --rc genhtml_legend=1 00:42:02.042 --rc geninfo_all_blocks=1 00:42:02.042 --rc geninfo_unexecuted_blocks=1 00:42:02.042 00:42:02.042 ' 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.042 --rc genhtml_branch_coverage=1 00:42:02.042 --rc genhtml_function_coverage=1 00:42:02.042 --rc genhtml_legend=1 00:42:02.042 --rc geninfo_all_blocks=1 
00:42:02.042 --rc geninfo_unexecuted_blocks=1 00:42:02.042 00:42:02.042 ' 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.042 --rc genhtml_branch_coverage=1 00:42:02.042 --rc genhtml_function_coverage=1 00:42:02.042 --rc genhtml_legend=1 00:42:02.042 --rc geninfo_all_blocks=1 00:42:02.042 --rc geninfo_unexecuted_blocks=1 00:42:02.042 00:42:02.042 ' 00:42:02.042 07:37:12 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.042 --rc genhtml_branch_coverage=1 00:42:02.042 --rc genhtml_function_coverage=1 00:42:02.042 --rc genhtml_legend=1 00:42:02.042 --rc geninfo_all_blocks=1 00:42:02.042 --rc geninfo_unexecuted_blocks=1 00:42:02.042 00:42:02.042 ' 00:42:02.042 07:37:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:02.042 07:37:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:02.042 07:37:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:02.042 07:37:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:02.042 07:37:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:02.042 07:37:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:02.042 07:37:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:02.042 07:37:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.042 07:37:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.042 07:37:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.042 07:37:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:02.042 07:37:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:02.042 07:37:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:02.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
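[editor's note] One harness artifact worth flagging in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and bash reports "[: : integer expression expected" because the tested variable expands to the empty string. The test carries on regardless, but the defensive form would default the expansion first. A sketch; FLAG is a stand-in, since the trace does not show which variable line 33 actually reads:

# Guard numeric tests against unset/empty variables (FLAG is hypothetical).
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "feature enabled"
fi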
00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.38UUSoD8BV 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.38UUSoD8BV 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.38UUSoD8BV 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.38UUSoD8BV 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QHp7aOopGi 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:02.043 07:37:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QHp7aOopGi 00:42:02.043 07:37:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QHp7aOopGi 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QHp7aOopGi 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=2729011 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2729011 00:42:02.043 07:37:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:02.043 07:37:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2729011 ']' 00:42:02.043 07:37:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:02.043 07:37:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:02.043 07:37:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:02.043 07:37:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:02.043 07:37:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:02.043 [2024-11-27 07:37:13.200597] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:42:02.043 [2024-11-27 07:37:13.200679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729011 ] 00:42:02.304 [2024-11-27 07:37:13.292537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.304 [2024-11-27 07:37:13.344829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.876 07:37:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:02.876 07:37:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:02.876 07:37:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:02.876 07:37:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.876 07:37:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:02.876 [2024-11-27 07:37:14.006092] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:02.876 null0 00:42:02.876 [2024-11-27 07:37:14.038141] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:02.876 [2024-11-27 07:37:14.038598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.876 07:37:14 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:02.876 [2024-11-27 07:37:14.070211] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:02.876 request: 00:42:02.876 { 00:42:02.876 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:02.876 "secure_channel": false, 00:42:02.876 "listen_address": { 00:42:02.876 "trtype": "tcp", 00:42:02.876 "traddr": "127.0.0.1", 00:42:02.876 "trsvcid": "4420" 00:42:02.876 }, 00:42:02.876 "method": "nvmf_subsystem_add_listener", 00:42:02.876 "req_id": 1 00:42:02.876 } 00:42:02.876 Got JSON-RPC error response 00:42:02.876 response: 00:42:02.876 { 00:42:02.876 
"code": -32602, 00:42:02.876 "message": "Invalid parameters" 00:42:02.876 } 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:02.876 07:37:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:03.137 07:37:14 keyring_file -- keyring/file.sh@47 -- # bperfpid=2729065 00:42:03.137 07:37:14 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2729065 /var/tmp/bperf.sock 00:42:03.137 07:37:14 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2729065 ']' 00:42:03.137 07:37:14 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:03.137 07:37:14 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:03.137 07:37:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:03.137 07:37:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:03.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:03.137 07:37:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:03.137 07:37:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:03.137 [2024-11-27 07:37:14.132259] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:42:03.137 [2024-11-27 07:37:14.132322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729065 ] 00:42:03.137 [2024-11-27 07:37:14.223334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:03.137 [2024-11-27 07:37:14.276726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:04.080 07:37:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:04.080 07:37:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:04.080 07:37:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:04.080 07:37:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:04.080 07:37:15 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QHp7aOopGi 00:42:04.080 07:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QHp7aOopGi 00:42:04.341 07:37:15 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:04.341 07:37:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:04.341 07:37:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:04.341 07:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:04.341 07:37:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:04.341 07:37:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.38UUSoD8BV == \/\t\m\p\/\t\m\p\.\3\8\U\U\S\o\D\8\B\V ]] 00:42:04.341 07:37:15 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:04.341 07:37:15 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:04.341 07:37:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:04.341 07:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:04.341 07:37:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:04.602 07:37:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.QHp7aOopGi == \/\t\m\p\/\t\m\p\.\Q\H\p\7\a\O\o\p\G\i ]] 00:42:04.602 07:37:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:04.602 07:37:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:04.602 07:37:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:04.602 07:37:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:04.602 07:37:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:04.602 07:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:04.862 07:37:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:04.862 07:37:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:04.862 07:37:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:04.862 07:37:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:04.862 07:37:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:04.862 07:37:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:04.862 07:37:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:05.122 07:37:16 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:05.122 07:37:16 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:05.122 07:37:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:05.122 [2024-11-27 07:37:16.239408] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:05.122 nvme0n1 00:42:05.382 07:37:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:05.382 07:37:16 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:05.382 07:37:16 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:05.382 07:37:16 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:05.382 07:37:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:05.642 07:37:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:05.642 07:37:16 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:05.642 Running I/O for 1 seconds... 00:42:07.024 18378.00 IOPS, 71.79 MiB/s 00:42:07.024 Latency(us) 00:42:07.024 [2024-11-27T06:37:18.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:07.024 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:07.024 nvme0n1 : 1.00 18439.31 72.03 0.00 0.00 6929.54 3577.17 17585.49 00:42:07.024 [2024-11-27T06:37:18.229Z] =================================================================================================================== 00:42:07.024 [2024-11-27T06:37:18.229Z] Total : 18439.31 72.03 0.00 0.00 6929.54 3577.17 17585.49 00:42:07.024 { 00:42:07.024 "results": [ 00:42:07.024 { 00:42:07.024 "job": "nvme0n1", 00:42:07.024 "core_mask": "0x2", 00:42:07.024 "workload": "randrw", 00:42:07.024 "percentage": 50, 00:42:07.024 "status": "finished", 00:42:07.024 "queue_depth": 128, 00:42:07.024 "io_size": 4096, 00:42:07.024 "runtime": 1.003617, 00:42:07.024 "iops": 18439.305033693134, 00:42:07.024 "mibps": 72.0285352878638, 00:42:07.024 "io_failed": 0, 00:42:07.024 "io_timeout": 0, 00:42:07.024 "avg_latency_us": 6929.541796174214, 00:42:07.024 "min_latency_us": 3577.173333333333, 00:42:07.024 "max_latency_us": 17585.493333333332 00:42:07.024 } 00:42:07.024 ], 00:42:07.024 "core_count": 1 00:42:07.024 } 00:42:07.024 07:37:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:07.024 07:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:07.024 07:37:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:07.024 07:37:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:07.024 07:37:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:07.024 07:37:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:07.024 07:37:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:07.024 07:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:07.024 07:37:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:07.024 07:37:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:07.024 07:37:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:07.024 07:37:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:07.024 07:37:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:07.024 07:37:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:07.024 07:37:18 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:07.283 07:37:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:07.283 07:37:18 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:07.283 07:37:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:07.283 07:37:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:07.283 07:37:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:07.283 07:37:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:07.284 07:37:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:07.284 07:37:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:07.284 07:37:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:07.284 07:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:07.544 [2024-11-27 07:37:18.501063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:07.544 [2024-11-27 07:37:18.501761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a1c50 (107): Transport endpoint is not connected 00:42:07.544 [2024-11-27 07:37:18.502757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a1c50 (9): Bad file descriptor 00:42:07.544 [2024-11-27 07:37:18.503759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:07.544 [2024-11-27 07:37:18.503769] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:07.544 [2024-11-27 07:37:18.503775] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:07.544 [2024-11-27 07:37:18.503781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
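[editor's note] The attach failure above is intentional: the controller was brought up with key0, so dialing it again with key1 must fail the TLS handshake, and the harness asserts that with its NOT wrapper. Roughly how that inversion works; a sketch only, since autotest_common.sh's real helper also distinguishes exit codes above 128, as the es checks in the trace show:

# Sketch of the NOT helper pattern: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the test requires
}

# e.g. attaching with the wrong PSK is expected to error out:
NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1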
00:42:07.544 request: 00:42:07.544 { 00:42:07.544 "name": "nvme0", 00:42:07.544 "trtype": "tcp", 00:42:07.544 "traddr": "127.0.0.1", 00:42:07.544 "adrfam": "ipv4", 00:42:07.544 "trsvcid": "4420", 00:42:07.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:07.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:07.544 "prchk_reftag": false, 00:42:07.544 "prchk_guard": false, 00:42:07.544 "hdgst": false, 00:42:07.544 "ddgst": false, 00:42:07.544 "psk": "key1", 00:42:07.544 "allow_unrecognized_csi": false, 00:42:07.544 "method": "bdev_nvme_attach_controller", 00:42:07.544 "req_id": 1 00:42:07.544 } 00:42:07.544 Got JSON-RPC error response 00:42:07.544 response: 00:42:07.544 { 00:42:07.544 "code": -5, 00:42:07.544 "message": "Input/output error" 00:42:07.544 } 00:42:07.544 07:37:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:07.544 07:37:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:07.544 07:37:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:07.544 07:37:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:07.544 07:37:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:07.544 07:37:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:07.544 07:37:18 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:07.544 07:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:07.804 07:37:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:07.804 07:37:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:07.804 07:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:08.063 07:37:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:08.063 07:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:08.063 07:37:19 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:08.063 07:37:19 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:08.063 07:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:08.322 07:37:19 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:08.322 07:37:19 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.38UUSoD8BV 00:42:08.322 07:37:19 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:08.322 07:37:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:08.322 07:37:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:08.322 07:37:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:08.322 07:37:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:08.322 07:37:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:08.322 07:37:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:08.322 07:37:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:08.322 07:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:08.581 [2024-11-27 07:37:19.577560] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.38UUSoD8BV': 0100660 00:42:08.581 [2024-11-27 07:37:19.577578] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:08.581 request: 00:42:08.581 { 00:42:08.581 "name": "key0", 00:42:08.581 "path": "/tmp/tmp.38UUSoD8BV", 00:42:08.581 "method": "keyring_file_add_key", 00:42:08.581 "req_id": 1 00:42:08.581 } 00:42:08.581 Got JSON-RPC error response 00:42:08.581 response: 00:42:08.581 { 00:42:08.581 "code": -1, 00:42:08.581 "message": "Operation not permitted" 00:42:08.581 } 00:42:08.581 07:37:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:08.581 07:37:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:08.581 07:37:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:08.581 07:37:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:08.581 07:37:19 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.38UUSoD8BV 00:42:08.581 07:37:19 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:08.581 07:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.38UUSoD8BV 00:42:08.841 07:37:19 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.38UUSoD8BV 00:42:08.841 07:37:19 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:08.841 07:37:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:08.841 07:37:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:08.841 07:37:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:08.841 07:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:08.841 07:37:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:08.841 07:37:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:08.841 07:37:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:08.841 07:37:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:08.841 07:37:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:08.841 07:37:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:08.841 07:37:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:08.841 07:37:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:08.841 07:37:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:08.841 07:37:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:08.841 07:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:09.102 [2024-11-27 07:37:20.147022] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.38UUSoD8BV': No such file or directory 00:42:09.102 [2024-11-27 07:37:20.147041] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:09.102 [2024-11-27 07:37:20.147054] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:09.102 [2024-11-27 07:37:20.147060] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:09.102 [2024-11-27 07:37:20.147066] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:09.102 [2024-11-27 07:37:20.147071] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:09.102 request: 00:42:09.102 { 00:42:09.102 "name": "nvme0", 00:42:09.102 "trtype": "tcp", 00:42:09.102 "traddr": "127.0.0.1", 00:42:09.102 "adrfam": "ipv4", 00:42:09.102 "trsvcid": "4420", 00:42:09.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:09.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:09.102 "prchk_reftag": false, 00:42:09.102 "prchk_guard": false, 00:42:09.102 "hdgst": false, 00:42:09.102 "ddgst": false, 00:42:09.102 "psk": "key0", 00:42:09.102 "allow_unrecognized_csi": false, 00:42:09.102 "method": "bdev_nvme_attach_controller", 00:42:09.102 "req_id": 1 00:42:09.102 } 00:42:09.102 Got JSON-RPC error response 00:42:09.102 response: 00:42:09.102 { 00:42:09.102 "code": -19, 00:42:09.102 "message": "No such device" 00:42:09.102 } 00:42:09.102 07:37:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:09.102 07:37:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:09.102 07:37:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:09.102 07:37:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:09.102 07:37:20 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:09.102 07:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:09.362 07:37:20 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.90dnixzrPJ 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:09.362 07:37:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:09.362 07:37:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:09.362 07:37:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:09.362 07:37:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:09.362 07:37:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:09.362 07:37:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.90dnixzrPJ 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.90dnixzrPJ 00:42:09.362 07:37:20 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.90dnixzrPJ 00:42:09.362 07:37:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.90dnixzrPJ 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.90dnixzrPJ 00:42:09.362 07:37:20 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:09.362 07:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:09.621 nvme0n1 00:42:09.622 07:37:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:09.622 07:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:09.622 07:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:09.622 07:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:09.622 07:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:09.622 07:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:09.881 07:37:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:09.881 07:37:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:09.881 07:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:10.140 07:37:21 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:10.141 07:37:21 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@8 -- # 
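[editor's note] Two earlier negative tests pin down the keyring's file checks: keyring_file_add_key rejects a key file with mode 0660 ("Invalid permissions ... 0100660", JSON-RPC code -1), and bdev_nvme_attach_controller returns -19 (No such device) once the file behind a registered key has been deleted. A sketch of those constraints with a placeholder path:

# Sketch of the key-file hygiene exercised above (path is a placeholder).
key=$(mktemp)      # fresh key file, as prep_key does
chmod 0660 "$key"  # group access => keyring_file_add_key fails (-1)
chmod 0600 "$key"  # owner-only permissions => accepted
rm -f "$key"       # file gone => attach via this key fails (-19)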
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.141 07:37:21 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:10.141 07:37:21 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:10.141 07:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.400 07:37:21 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:10.400 07:37:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:10.400 07:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:10.659 07:37:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:10.659 07:37:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:10.659 07:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.921 07:37:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:10.921 07:37:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.90dnixzrPJ 00:42:10.921 07:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.90dnixzrPJ 00:42:10.921 07:37:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QHp7aOopGi 00:42:10.921 07:37:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QHp7aOopGi 00:42:11.182 07:37:22 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:11.182 07:37:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:11.441 nvme0n1 00:42:11.442 07:37:22 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:11.442 07:37:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:11.702 07:37:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:11.702 "subsystems": [ 00:42:11.702 { 00:42:11.702 "subsystem": "keyring", 00:42:11.702 "config": [ 00:42:11.702 { 00:42:11.702 "method": "keyring_file_add_key", 00:42:11.702 "params": { 00:42:11.702 "name": "key0", 00:42:11.702 "path": "/tmp/tmp.90dnixzrPJ" 00:42:11.702 } 00:42:11.702 }, 00:42:11.702 { 00:42:11.702 "method": "keyring_file_add_key", 00:42:11.702 "params": { 00:42:11.702 "name": "key1", 00:42:11.702 "path": "/tmp/tmp.QHp7aOopGi" 00:42:11.702 } 00:42:11.702 } 00:42:11.702 ] 00:42:11.702 
}, 00:42:11.702 { 00:42:11.702 "subsystem": "iobuf", 00:42:11.702 "config": [ 00:42:11.702 { 00:42:11.702 "method": "iobuf_set_options", 00:42:11.702 "params": { 00:42:11.702 "small_pool_count": 8192, 00:42:11.702 "large_pool_count": 1024, 00:42:11.702 "small_bufsize": 8192, 00:42:11.702 "large_bufsize": 135168, 00:42:11.702 "enable_numa": false 00:42:11.702 } 00:42:11.702 } 00:42:11.702 ] 00:42:11.702 }, 00:42:11.702 { 00:42:11.702 "subsystem": "sock", 00:42:11.702 "config": [ 00:42:11.702 { 00:42:11.702 "method": "sock_set_default_impl", 00:42:11.703 "params": { 00:42:11.703 "impl_name": "posix" 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "sock_impl_set_options", 00:42:11.703 "params": { 00:42:11.703 "impl_name": "ssl", 00:42:11.703 "recv_buf_size": 4096, 00:42:11.703 "send_buf_size": 4096, 00:42:11.703 "enable_recv_pipe": true, 00:42:11.703 "enable_quickack": false, 00:42:11.703 "enable_placement_id": 0, 00:42:11.703 "enable_zerocopy_send_server": true, 00:42:11.703 "enable_zerocopy_send_client": false, 00:42:11.703 "zerocopy_threshold": 0, 00:42:11.703 "tls_version": 0, 00:42:11.703 "enable_ktls": false 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "sock_impl_set_options", 00:42:11.703 "params": { 00:42:11.703 "impl_name": "posix", 00:42:11.703 "recv_buf_size": 2097152, 00:42:11.703 "send_buf_size": 2097152, 00:42:11.703 "enable_recv_pipe": true, 00:42:11.703 "enable_quickack": false, 00:42:11.703 "enable_placement_id": 0, 00:42:11.703 "enable_zerocopy_send_server": true, 00:42:11.703 "enable_zerocopy_send_client": false, 00:42:11.703 "zerocopy_threshold": 0, 00:42:11.703 "tls_version": 0, 00:42:11.703 "enable_ktls": false 00:42:11.703 } 00:42:11.703 } 00:42:11.703 ] 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "subsystem": "vmd", 00:42:11.703 "config": [] 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "subsystem": "accel", 00:42:11.703 "config": [ 00:42:11.703 { 00:42:11.703 "method": "accel_set_options", 00:42:11.703 "params": { 00:42:11.703 "small_cache_size": 128, 00:42:11.703 "large_cache_size": 16, 00:42:11.703 "task_count": 2048, 00:42:11.703 "sequence_count": 2048, 00:42:11.703 "buf_count": 2048 00:42:11.703 } 00:42:11.703 } 00:42:11.703 ] 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "subsystem": "bdev", 00:42:11.703 "config": [ 00:42:11.703 { 00:42:11.703 "method": "bdev_set_options", 00:42:11.703 "params": { 00:42:11.703 "bdev_io_pool_size": 65535, 00:42:11.703 "bdev_io_cache_size": 256, 00:42:11.703 "bdev_auto_examine": true, 00:42:11.703 "iobuf_small_cache_size": 128, 00:42:11.703 "iobuf_large_cache_size": 16 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "bdev_raid_set_options", 00:42:11.703 "params": { 00:42:11.703 "process_window_size_kb": 1024, 00:42:11.703 "process_max_bandwidth_mb_sec": 0 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "bdev_iscsi_set_options", 00:42:11.703 "params": { 00:42:11.703 "timeout_sec": 30 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "bdev_nvme_set_options", 00:42:11.703 "params": { 00:42:11.703 "action_on_timeout": "none", 00:42:11.703 "timeout_us": 0, 00:42:11.703 "timeout_admin_us": 0, 00:42:11.703 "keep_alive_timeout_ms": 10000, 00:42:11.703 "arbitration_burst": 0, 00:42:11.703 "low_priority_weight": 0, 00:42:11.703 "medium_priority_weight": 0, 00:42:11.703 "high_priority_weight": 0, 00:42:11.703 "nvme_adminq_poll_period_us": 10000, 00:42:11.703 "nvme_ioq_poll_period_us": 0, 00:42:11.703 "io_queue_requests": 512, 00:42:11.703 
"delay_cmd_submit": true, 00:42:11.703 "transport_retry_count": 4, 00:42:11.703 "bdev_retry_count": 3, 00:42:11.703 "transport_ack_timeout": 0, 00:42:11.703 "ctrlr_loss_timeout_sec": 0, 00:42:11.703 "reconnect_delay_sec": 0, 00:42:11.703 "fast_io_fail_timeout_sec": 0, 00:42:11.703 "disable_auto_failback": false, 00:42:11.703 "generate_uuids": false, 00:42:11.703 "transport_tos": 0, 00:42:11.703 "nvme_error_stat": false, 00:42:11.703 "rdma_srq_size": 0, 00:42:11.703 "io_path_stat": false, 00:42:11.703 "allow_accel_sequence": false, 00:42:11.703 "rdma_max_cq_size": 0, 00:42:11.703 "rdma_cm_event_timeout_ms": 0, 00:42:11.703 "dhchap_digests": [ 00:42:11.703 "sha256", 00:42:11.703 "sha384", 00:42:11.703 "sha512" 00:42:11.703 ], 00:42:11.703 "dhchap_dhgroups": [ 00:42:11.703 "null", 00:42:11.703 "ffdhe2048", 00:42:11.703 "ffdhe3072", 00:42:11.703 "ffdhe4096", 00:42:11.703 "ffdhe6144", 00:42:11.703 "ffdhe8192" 00:42:11.703 ] 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "bdev_nvme_attach_controller", 00:42:11.703 "params": { 00:42:11.703 "name": "nvme0", 00:42:11.703 "trtype": "TCP", 00:42:11.703 "adrfam": "IPv4", 00:42:11.703 "traddr": "127.0.0.1", 00:42:11.703 "trsvcid": "4420", 00:42:11.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:11.703 "prchk_reftag": false, 00:42:11.703 "prchk_guard": false, 00:42:11.703 "ctrlr_loss_timeout_sec": 0, 00:42:11.703 "reconnect_delay_sec": 0, 00:42:11.703 "fast_io_fail_timeout_sec": 0, 00:42:11.703 "psk": "key0", 00:42:11.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:11.703 "hdgst": false, 00:42:11.703 "ddgst": false, 00:42:11.703 "multipath": "multipath" 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "bdev_nvme_set_hotplug", 00:42:11.703 "params": { 00:42:11.703 "period_us": 100000, 00:42:11.703 "enable": false 00:42:11.703 } 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "method": "bdev_wait_for_examine" 00:42:11.703 } 00:42:11.703 ] 00:42:11.703 }, 00:42:11.703 { 00:42:11.703 "subsystem": "nbd", 00:42:11.703 "config": [] 00:42:11.703 } 00:42:11.703 ] 00:42:11.703 }' 00:42:11.703 07:37:22 keyring_file -- keyring/file.sh@115 -- # killprocess 2729065 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2729065 ']' 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2729065 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729065 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729065' 00:42:11.703 killing process with pid 2729065 00:42:11.703 07:37:22 keyring_file -- common/autotest_common.sh@973 -- # kill 2729065 00:42:11.703 Received shutdown signal, test time was about 1.000000 seconds 00:42:11.703 00:42:11.703 Latency(us) 00:42:11.703 [2024-11-27T06:37:22.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.703 [2024-11-27T06:37:22.908Z] =================================================================================================================== 00:42:11.703 [2024-11-27T06:37:22.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:11.703 07:37:22 
keyring_file -- common/autotest_common.sh@978 -- # wait 2729065 00:42:11.703 07:37:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=2730870 00:42:11.703 07:37:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2730870 /var/tmp/bperf.sock 00:42:11.704 07:37:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2730870 ']' 00:42:11.704 07:37:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:11.704 07:37:22 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:11.704 07:37:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:11.704 07:37:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:11.704 07:37:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:11.704 07:37:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:11.704 "subsystems": [ 00:42:11.704 { 00:42:11.704 "subsystem": "keyring", 00:42:11.704 "config": [ 00:42:11.704 { 00:42:11.704 "method": "keyring_file_add_key", 00:42:11.704 "params": { 00:42:11.704 "name": "key0", 00:42:11.704 "path": "/tmp/tmp.90dnixzrPJ" 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "keyring_file_add_key", 00:42:11.704 "params": { 00:42:11.704 "name": "key1", 00:42:11.704 "path": "/tmp/tmp.QHp7aOopGi" 00:42:11.704 } 00:42:11.704 } 00:42:11.704 ] 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "subsystem": "iobuf", 00:42:11.704 "config": [ 00:42:11.704 { 00:42:11.704 "method": "iobuf_set_options", 00:42:11.704 "params": { 00:42:11.704 "small_pool_count": 8192, 00:42:11.704 "large_pool_count": 1024, 00:42:11.704 "small_bufsize": 8192, 00:42:11.704 "large_bufsize": 135168, 00:42:11.704 "enable_numa": false 00:42:11.704 } 00:42:11.704 } 00:42:11.704 ] 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "subsystem": "sock", 00:42:11.704 "config": [ 00:42:11.704 { 00:42:11.704 "method": "sock_set_default_impl", 00:42:11.704 "params": { 00:42:11.704 "impl_name": "posix" 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "sock_impl_set_options", 00:42:11.704 "params": { 00:42:11.704 "impl_name": "ssl", 00:42:11.704 "recv_buf_size": 4096, 00:42:11.704 "send_buf_size": 4096, 00:42:11.704 "enable_recv_pipe": true, 00:42:11.704 "enable_quickack": false, 00:42:11.704 "enable_placement_id": 0, 00:42:11.704 "enable_zerocopy_send_server": true, 00:42:11.704 "enable_zerocopy_send_client": false, 00:42:11.704 "zerocopy_threshold": 0, 00:42:11.704 "tls_version": 0, 00:42:11.704 "enable_ktls": false 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "sock_impl_set_options", 00:42:11.704 "params": { 00:42:11.704 "impl_name": "posix", 00:42:11.704 "recv_buf_size": 2097152, 00:42:11.704 "send_buf_size": 2097152, 00:42:11.704 "enable_recv_pipe": true, 00:42:11.704 "enable_quickack": false, 00:42:11.704 "enable_placement_id": 0, 00:42:11.704 "enable_zerocopy_send_server": true, 00:42:11.704 "enable_zerocopy_send_client": false, 00:42:11.704 "zerocopy_threshold": 0, 00:42:11.704 "tls_version": 0, 00:42:11.704 "enable_ktls": false 00:42:11.704 } 00:42:11.704 } 00:42:11.704 ] 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "subsystem": "vmd", 00:42:11.704 "config": [] 00:42:11.704 }, 
00:42:11.704 { 00:42:11.704 "subsystem": "accel", 00:42:11.704 "config": [ 00:42:11.704 { 00:42:11.704 "method": "accel_set_options", 00:42:11.704 "params": { 00:42:11.704 "small_cache_size": 128, 00:42:11.704 "large_cache_size": 16, 00:42:11.704 "task_count": 2048, 00:42:11.704 "sequence_count": 2048, 00:42:11.704 "buf_count": 2048 00:42:11.704 } 00:42:11.704 } 00:42:11.704 ] 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "subsystem": "bdev", 00:42:11.704 "config": [ 00:42:11.704 { 00:42:11.704 "method": "bdev_set_options", 00:42:11.704 "params": { 00:42:11.704 "bdev_io_pool_size": 65535, 00:42:11.704 "bdev_io_cache_size": 256, 00:42:11.704 "bdev_auto_examine": true, 00:42:11.704 "iobuf_small_cache_size": 128, 00:42:11.704 "iobuf_large_cache_size": 16 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "bdev_raid_set_options", 00:42:11.704 "params": { 00:42:11.704 "process_window_size_kb": 1024, 00:42:11.704 "process_max_bandwidth_mb_sec": 0 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "bdev_iscsi_set_options", 00:42:11.704 "params": { 00:42:11.704 "timeout_sec": 30 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "bdev_nvme_set_options", 00:42:11.704 "params": { 00:42:11.704 "action_on_timeout": "none", 00:42:11.704 "timeout_us": 0, 00:42:11.704 "timeout_admin_us": 0, 00:42:11.704 "keep_alive_timeout_ms": 10000, 00:42:11.704 "arbitration_burst": 0, 00:42:11.704 "low_priority_weight": 0, 00:42:11.704 "medium_priority_weight": 0, 00:42:11.704 "high_priority_weight": 0, 00:42:11.704 "nvme_adminq_poll_period_us": 10000, 00:42:11.704 "nvme_ioq_poll_period_us": 0, 00:42:11.704 "io_queue_requests": 512, 00:42:11.704 "delay_cmd_submit": true, 00:42:11.704 "transport_retry_count": 4, 00:42:11.704 "bdev_retry_count": 3, 00:42:11.704 "transport_ack_timeout": 0, 00:42:11.704 "ctrlr_loss_timeout_sec": 0, 00:42:11.704 "reconnect_delay_sec": 0, 00:42:11.704 "fast_io_fail_timeout_sec": 0, 00:42:11.704 "disable_auto_failback": false, 00:42:11.704 "generate_uuids": false, 00:42:11.704 "transport_tos": 0, 00:42:11.704 "nvme_error_stat": false, 00:42:11.704 "rdma_srq_size": 0, 00:42:11.704 "io_path_stat": false, 00:42:11.704 "allow_accel_sequence": false, 00:42:11.704 "rdma_max_cq_size": 0, 00:42:11.704 "rdma_cm_event_timeout_ms": 0, 00:42:11.704 "dhchap_digests": [ 00:42:11.704 "sha256", 00:42:11.704 "sha384", 00:42:11.704 "sha512" 00:42:11.704 ], 00:42:11.704 "dhchap_dhgroups": [ 00:42:11.704 "null", 00:42:11.704 "ffdhe2048", 00:42:11.704 "ffdhe3072", 00:42:11.704 "ffdhe4096", 00:42:11.704 "ffdhe6144", 00:42:11.704 "ffdhe8192" 00:42:11.704 ] 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "bdev_nvme_attach_controller", 00:42:11.704 "params": { 00:42:11.704 "name": "nvme0", 00:42:11.704 "trtype": "TCP", 00:42:11.704 "adrfam": "IPv4", 00:42:11.704 "traddr": "127.0.0.1", 00:42:11.704 "trsvcid": "4420", 00:42:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:11.704 "prchk_reftag": false, 00:42:11.704 "prchk_guard": false, 00:42:11.704 "ctrlr_loss_timeout_sec": 0, 00:42:11.704 "reconnect_delay_sec": 0, 00:42:11.704 "fast_io_fail_timeout_sec": 0, 00:42:11.704 "psk": "key0", 00:42:11.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:11.704 "hdgst": false, 00:42:11.704 "ddgst": false, 00:42:11.704 "multipath": "multipath" 00:42:11.704 } 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "method": "bdev_nvme_set_hotplug", 00:42:11.704 "params": { 00:42:11.704 "period_us": 100000, 00:42:11.704 "enable": false 00:42:11.704 } 00:42:11.704 }, 
00:42:11.704 { 00:42:11.704 "method": "bdev_wait_for_examine" 00:42:11.704 } 00:42:11.704 ] 00:42:11.704 }, 00:42:11.704 { 00:42:11.704 "subsystem": "nbd", 00:42:11.704 "config": [] 00:42:11.704 } 00:42:11.704 ] 00:42:11.704 }' 00:42:11.704 07:37:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:11.704 [2024-11-27 07:37:22.889641] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 00:42:11.704 [2024-11-27 07:37:22.889698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730870 ] 00:42:11.965 [2024-11-27 07:37:22.971192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:11.965 [2024-11-27 07:37:23.000497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:11.965 [2024-11-27 07:37:23.144188] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:12.536 07:37:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:12.536 07:37:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:12.536 07:37:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:12.536 07:37:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:12.536 07:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:12.796 07:37:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:12.796 07:37:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:12.796 07:37:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:12.796 07:37:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:12.796 07:37:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:12.796 07:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:12.796 07:37:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:13.056 07:37:24 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:13.056 07:37:24 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:13.056 07:37:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:13.056 07:37:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:13.056 07:37:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:13.056 07:37:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:13.056 07:37:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:13.056 07:37:24 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:13.056 07:37:24 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:13.056 07:37:24 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:13.056 07:37:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:13.316 07:37:24 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:13.316 07:37:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:13.316 07:37:24 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.90dnixzrPJ /tmp/tmp.QHp7aOopGi 00:42:13.316 07:37:24 keyring_file -- keyring/file.sh@20 -- # killprocess 2730870 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2730870 ']' 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2730870 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730870 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730870' 00:42:13.316 killing process with pid 2730870 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@973 -- # kill 2730870 00:42:13.316 Received shutdown signal, test time was about 1.000000 seconds 00:42:13.316 00:42:13.316 Latency(us) 00:42:13.316 [2024-11-27T06:37:24.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:13.316 [2024-11-27T06:37:24.521Z] =================================================================================================================== 00:42:13.316 [2024-11-27T06:37:24.521Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:13.316 07:37:24 keyring_file -- common/autotest_common.sh@978 -- # wait 2730870 00:42:13.577 07:37:24 keyring_file -- keyring/file.sh@21 -- # killprocess 2729011 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2729011 ']' 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2729011 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729011 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729011' 00:42:13.577 killing process with pid 2729011 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@973 -- # kill 2729011 00:42:13.577 07:37:24 keyring_file -- common/autotest_common.sh@978 -- # wait 2729011 00:42:13.837 00:42:13.837 real 0m12.046s 00:42:13.837 user 0m28.969s 00:42:13.837 sys 0m2.766s 00:42:13.837 07:37:24 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:13.837 07:37:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:13.837 ************************************ 00:42:13.837 END TEST keyring_file 00:42:13.837 ************************************ 00:42:13.837 07:37:24 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:42:13.837 07:37:24 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:13.837 07:37:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:13.837 07:37:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:13.837 07:37:24 -- 
common/autotest_common.sh@10 -- # set +x 00:42:13.837 ************************************ 00:42:13.837 START TEST keyring_linux 00:42:13.837 ************************************ 00:42:13.837 07:37:24 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:13.837 Joined session keyring: 922838936 00:42:13.837 * Looking for test storage... 00:42:13.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:13.837 07:37:24 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:13.837 07:37:25 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:42:13.837 07:37:25 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:14.099 07:37:25 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:14.099 07:37:25 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:14.099 07:37:25 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.099 --rc genhtml_branch_coverage=1 00:42:14.099 --rc genhtml_function_coverage=1 00:42:14.099 --rc genhtml_legend=1 00:42:14.099 --rc geninfo_all_blocks=1 00:42:14.099 --rc geninfo_unexecuted_blocks=1 00:42:14.099 00:42:14.099 ' 00:42:14.099 07:37:25 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.099 --rc genhtml_branch_coverage=1 00:42:14.099 --rc genhtml_function_coverage=1 00:42:14.099 --rc genhtml_legend=1 00:42:14.099 --rc geninfo_all_blocks=1 00:42:14.099 --rc geninfo_unexecuted_blocks=1 00:42:14.099 00:42:14.099 ' 00:42:14.099 07:37:25 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.099 --rc genhtml_branch_coverage=1 00:42:14.099 --rc genhtml_function_coverage=1 00:42:14.099 --rc genhtml_legend=1 00:42:14.099 --rc geninfo_all_blocks=1 00:42:14.099 --rc geninfo_unexecuted_blocks=1 00:42:14.099 00:42:14.099 ' 00:42:14.099 07:37:25 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:14.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.099 --rc genhtml_branch_coverage=1 00:42:14.099 --rc genhtml_function_coverage=1 00:42:14.099 --rc genhtml_legend=1 00:42:14.099 --rc geninfo_all_blocks=1 00:42:14.099 --rc geninfo_unexecuted_blocks=1 00:42:14.099 00:42:14.099 ' 00:42:14.099 07:37:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:14.099 07:37:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:14.099 07:37:25 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:14.099 07:37:25 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:14.099 07:37:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.099 07:37:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.099 07:37:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.100 07:37:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:14.100 07:37:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
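For readers replaying this setup by hand: the NVME_HOSTNQN/NVME_HOSTID pair traced just above comes straight from nvme-cli, and nvmf/common.sh folds it into the NVME_HOST argument array reused by later connect calls. A minimal sketch of those traced lines (the derivation shown for the host ID is one plausible spelling, not the helper's exact code):

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # the trailing UUID doubles as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")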
00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:14.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:14.100 /tmp/:spdk-test:key0 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:14.100 
07:37:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:14.100 07:37:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:14.100 07:37:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:14.100 /tmp/:spdk-test:key1 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2731339 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2731339 00:42:14.100 07:37:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:14.100 07:37:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2731339 ']' 00:42:14.100 07:37:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:14.100 07:37:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:14.100 07:37:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:14.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:14.100 07:37:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:14.100 07:37:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:14.100 [2024-11-27 07:37:25.279317] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
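The two /tmp/:spdk-test:key files prepared above hold TLS PSKs in the NVMe interchange form: prefix, two-digit hash id, base64 payload. The traced format_key helper pipes the key through python -; below is a self-contained sketch of that encoding, assuming the payload is the ASCII key with a little-endian zlib CRC-32 appended, which is consistent with the NVMeTLSkey-1:00:... strings compared later in this run:

python - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"    # configured key bytes; hash id "00" = key given in the clear
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC-32 appended as an integrity check
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF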
00:42:14.100 [2024-11-27 07:37:25.279376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731339 ] 00:42:14.360 [2024-11-27 07:37:25.359973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.360 [2024-11-27 07:37:25.390407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:14.931 07:37:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:14.931 [2024-11-27 07:37:26.058033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:14.931 null0 00:42:14.931 [2024-11-27 07:37:26.090093] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:14.931 [2024-11-27 07:37:26.090448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.931 07:37:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:14.931 728235331 00:42:14.931 07:37:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:14.931 442783583 00:42:14.931 07:37:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2731646 00:42:14.931 07:37:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2731646 /var/tmp/bperf.sock 00:42:14.931 07:37:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2731646 ']' 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:14.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:14.931 07:37:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:15.191 [2024-11-27 07:37:26.169517] Starting SPDK v25.01-pre git sha1 4915847b4 / DPDK 24.03.0 initialization... 
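At this point both bperf keys live in the kernel session keyring rather than in files. The keyctl round trip that the assertions below rely on is, in outline (payload abbreviated to the interchange string shown above):

# keyctl add prints the new key's serial; search resolves the same serial by name
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64-payload>:" @s)
keyctl search @s user :spdk-test:key0   # -> 728235331 in this run
keyctl print "$sn"                      # -> the PSK payload the test compares
keyctl unlink "$sn"                     # cleanup, mirrored at the end of the test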
00:42:15.191 [2024-11-27 07:37:26.169568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731646 ] 00:42:15.191 [2024-11-27 07:37:26.252021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.191 [2024-11-27 07:37:26.281860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.132 07:37:26 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:16.132 07:37:26 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:16.132 07:37:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:16.132 07:37:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:16.132 07:37:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:16.132 07:37:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:16.392 07:37:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:16.392 07:37:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:16.392 [2024-11-27 07:37:27.490991] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:16.392 nvme0n1 00:42:16.392 07:37:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:16.392 07:37:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:16.392 07:37:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:16.392 07:37:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:16.392 07:37:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:16.392 07:37:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.653 07:37:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:16.653 07:37:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:16.653 07:37:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:16.653 07:37:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:16.653 07:37:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.653 07:37:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.653 07:37:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:16.913 07:37:27 keyring_linux -- keyring/linux.sh@25 -- # sn=728235331 00:42:16.913 07:37:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:16.913 07:37:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:16.913 07:37:27 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 728235331 == \7\2\8\2\3\5\3\3\1 ]] 00:42:16.913 07:37:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 728235331 00:42:16.913 07:37:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:16.913 07:37:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:16.913 Running I/O for 1 seconds... 00:42:18.115 24683.00 IOPS, 96.42 MiB/s 00:42:18.115 Latency(us) 00:42:18.115 [2024-11-27T06:37:29.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:18.115 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:18.115 nvme0n1 : 1.01 24683.24 96.42 0.00 0.00 5170.57 4287.15 8956.59 00:42:18.115 [2024-11-27T06:37:29.320Z] =================================================================================================================== 00:42:18.115 [2024-11-27T06:37:29.320Z] Total : 24683.24 96.42 0.00 0.00 5170.57 4287.15 8956.59 00:42:18.115 { 00:42:18.115 "results": [ 00:42:18.115 { 00:42:18.115 "job": "nvme0n1", 00:42:18.115 "core_mask": "0x2", 00:42:18.115 "workload": "randread", 00:42:18.115 "status": "finished", 00:42:18.115 "queue_depth": 128, 00:42:18.115 "io_size": 4096, 00:42:18.115 "runtime": 1.005176, 00:42:18.115 "iops": 24683.23955207844, 00:42:18.115 "mibps": 96.41890450030641, 00:42:18.115 "io_failed": 0, 00:42:18.115 "io_timeout": 0, 00:42:18.115 "avg_latency_us": 5170.570876896, 00:42:18.115 "min_latency_us": 4287.1466666666665, 00:42:18.115 "max_latency_us": 8956.586666666666 00:42:18.115 } 00:42:18.115 ], 00:42:18.115 "core_count": 1 00:42:18.115 } 00:42:18.115 07:37:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:18.115 07:37:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:18.115 07:37:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:18.115 07:37:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:18.115 07:37:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:18.115 07:37:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:18.115 07:37:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:18.115 07:37:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.376 07:37:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:18.376 07:37:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:18.376 07:37:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:18.376 07:37:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:18.376 07:37:29 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:42:18.376 07:37:29 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:18.376 07:37:29 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:18.376 07:37:29 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:18.376 07:37:29 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:18.376 07:37:29 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:18.376 07:37:29 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:18.376 07:37:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:18.376 [2024-11-27 07:37:29.579257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:18.376 [2024-11-27 07:37:29.579744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf889e0 (107): Transport endpoint is not connected 00:42:18.637 [2024-11-27 07:37:29.580741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf889e0 (9): Bad file descriptor 00:42:18.637 [2024-11-27 07:37:29.581743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:18.637 [2024-11-27 07:37:29.581751] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:18.637 [2024-11-27 07:37:29.581757] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:18.637 [2024-11-27 07:37:29.581763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
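This failure is the intended negative case: the first controller (using key0) has already been detached, and the attach with the second key is wrapped in NOT, so the ctrlr-in-error-state messages above are the pass condition. Stripped of the trace prefixes, the call under test is:

# Expected to fail: the TLS handshake cannot complete with the second key
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1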
00:42:18.637 request: 00:42:18.637 { 00:42:18.637 "name": "nvme0", 00:42:18.637 "trtype": "tcp", 00:42:18.637 "traddr": "127.0.0.1", 00:42:18.637 "adrfam": "ipv4", 00:42:18.637 "trsvcid": "4420", 00:42:18.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:18.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:18.637 "prchk_reftag": false, 00:42:18.637 "prchk_guard": false, 00:42:18.637 "hdgst": false, 00:42:18.637 "ddgst": false, 00:42:18.637 "psk": ":spdk-test:key1", 00:42:18.637 "allow_unrecognized_csi": false, 00:42:18.637 "method": "bdev_nvme_attach_controller", 00:42:18.637 "req_id": 1 00:42:18.637 } 00:42:18.637 Got JSON-RPC error response 00:42:18.637 response: 00:42:18.637 { 00:42:18.637 "code": -5, 00:42:18.637 "message": "Input/output error" 00:42:18.637 } 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@33 -- # sn=728235331 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 728235331 00:42:18.637 1 links removed 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@33 -- # sn=442783583 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 442783583 00:42:18.637 1 links removed 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2731646 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2731646 ']' 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2731646 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731646 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731646' 00:42:18.637 killing process with pid 2731646 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 2731646 00:42:18.637 Received shutdown signal, test time was about 1.000000 seconds 00:42:18.637 00:42:18.637 
Latency(us) 00:42:18.637 [2024-11-27T06:37:29.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:18.637 [2024-11-27T06:37:29.842Z] =================================================================================================================== 00:42:18.637 [2024-11-27T06:37:29.842Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 2731646 00:42:18.637 07:37:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2731339 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2731339 ']' 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2731339 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:18.637 07:37:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731339 00:42:18.898 07:37:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:18.898 07:37:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:18.898 07:37:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731339' 00:42:18.898 killing process with pid 2731339 00:42:18.898 07:37:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 2731339 00:42:18.898 07:37:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 2731339 00:42:18.898 00:42:18.898 real 0m5.141s 00:42:18.898 user 0m9.554s 00:42:18.898 sys 0m1.414s 00:42:18.898 07:37:30 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:18.898 07:37:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:18.898 ************************************ 00:42:18.898 END TEST keyring_linux 00:42:18.898 ************************************ 00:42:18.898 07:37:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:18.898 07:37:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:18.898 07:37:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:18.898 07:37:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:18.898 07:37:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:18.898 07:37:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:18.898 07:37:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:18.898 07:37:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:18.898 07:37:30 -- common/autotest_common.sh@10 -- # set +x 00:42:18.898 07:37:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:18.898 07:37:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:18.898 07:37:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:18.898 07:37:30 -- common/autotest_common.sh@10 -- # set +x 00:42:27.035 INFO: APP EXITING 
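Both killprocess calls above follow the same guarded pattern traced throughout the suite. Reconstructed loosely from those xtrace lines (the in-tree helper has more branches, e.g. for sudo-wrapped processes, which this sketch only stubs out):

# Sketch of common/autotest_common.sh's killprocess, per the trace above
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in the runs above
        [[ $name == sudo ]] && return 1           # the real helper treats sudo specially
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap it so the caller can assert exit
}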
00:42:27.035 INFO: killing all VMs 00:42:27.035 INFO: killing vhost app 00:42:27.035 WARN: no vhost pid file found 00:42:27.035 INFO: EXIT DONE 00:42:30.334 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:65:00.0 (144d a80a): Already using the nvme driver 00:42:30.334 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:42:30.334 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:42:34.543 Cleaning 00:42:34.543 Removing: /var/run/dpdk/spdk0/config 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:34.543 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:34.543 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:34.543 Removing: /var/run/dpdk/spdk1/config 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:34.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:34.544 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:34.544 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:34.544 Removing: /var/run/dpdk/spdk2/config 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:34.544 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:34.544 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:34.544 Removing: 
/var/run/dpdk/spdk3/config 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:34.544 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:34.544 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:34.544 Removing: /var/run/dpdk/spdk4/config 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:34.544 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:34.544 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:34.544 Removing: /dev/shm/bdev_svc_trace.1 00:42:34.544 Removing: /dev/shm/nvmf_trace.0 00:42:34.544 Removing: /dev/shm/spdk_tgt_trace.pid2152049 00:42:34.544 Removing: /var/run/dpdk/spdk0 00:42:34.544 Removing: /var/run/dpdk/spdk1 00:42:34.544 Removing: /var/run/dpdk/spdk2 00:42:34.544 Removing: /var/run/dpdk/spdk3 00:42:34.544 Removing: /var/run/dpdk/spdk4 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2150557 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2152049 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2152899 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2154040 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2154386 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2155827 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2156177 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2156388 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2157526 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2158309 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2158700 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2159101 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2159493 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2159784 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2159960 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2160307 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2160689 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2161782 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2165360 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2165724 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2166094 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2166174 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2166797 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2166816 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2167328 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2167526 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2167887 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2167968 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2168261 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2168430 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2169045 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2169248 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2169537 00:42:34.544 Removing: 
/var/run/dpdk/spdk_pid2174317 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2179576 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2191638 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2192490 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2197582 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2197958 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2203361 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2210961 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2214068 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2226608 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2237677 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2239693 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2240885 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2262416 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2267366 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2324001 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2330483 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2337648 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2345529 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2345592 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2346621 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2347653 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2348716 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2349335 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2349395 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2349677 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2349742 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2349747 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2350746 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2351752 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2352777 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2353453 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2353476 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2353790 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2355232 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2356629 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2366933 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2401219 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2407090 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2409092 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2411430 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2411622 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2411807 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2412141 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2412869 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2415093 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2416287 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2416994 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2419627 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2420428 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2421119 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2426011 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2432648 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2432650 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2432652 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2437277 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2447518 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2452907 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2460390 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2461942 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2463570 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2465324 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2470932 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2476163 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2481195 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2490310 00:42:34.544 Removing: /var/run/dpdk/spdk_pid2490394 00:42:34.544 Removing: 
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2495756
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2496016
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2496624
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2496683
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2502069
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2502891
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2508545
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2511990
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2518655
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2525166
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2535192
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2543987
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2544030
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2567576
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2568287
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2569122
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2569958
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2570940
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2571702
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2572385
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2573071
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2578300
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2578588
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2585818
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2586144
00:42:34.544 Removing: /var/run/dpdk/spdk_pid2592664
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2597693
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2609884
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2610560
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2615603
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2615956
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2620993
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2627870
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2630872
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2643249
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2653850
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2655818
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2656926
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2677116
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2681849
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2685056
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2692798
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2692831
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2698715
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2701063
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2703427
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2704690
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2707145
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2708611
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2719186
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2719719
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2720253
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2723150
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2723817
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2724397
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2729011
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2729065
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2730870
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2731339
00:42:34.805 Removing: /var/run/dpdk/spdk_pid2731646
00:42:34.805 Clean
00:42:34.805 07:37:45 -- common/autotest_common.sh@1453 -- # return 0
00:42:34.805 07:37:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:42:34.805 07:37:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:34.805 07:37:45 -- common/autotest_common.sh@10 -- # set +x
00:42:35.147 07:37:46 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:42:35.147 07:37:46 -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:35.147 07:37:46 -- common/autotest_common.sh@10 -- # set +x
00:42:35.147 07:37:46 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:35.147 07:37:46 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:42:35.147 07:37:46 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:42:35.147 07:37:46 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:42:35.147 07:37:46 -- spdk/autotest.sh@398 -- # hostname
00:42:35.147 07:37:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:42:35.147 geninfo: WARNING: invalid characters removed from testname!
00:43:01.895 07:38:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:03.805 07:38:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:05.717 07:38:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:07.629 07:38:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:09.022 07:38:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:10.935 07:38:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
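The lcov invocations above follow a standard coverage post-processing flow: capture test coverage from the build tree (-c), merge it with a pre-test baseline (-a ... -a ...), then repeatedly remove (-r) path globs that should not count toward coverage (vendored DPDK sources, system headers under /usr, example and tool apps). Below is a minimal standalone sketch of that flow; the OUT directory is a hypothetical stand-in, and the glob list is taken from the commands logged above rather than from spdk/autotest.sh itself:

#!/usr/bin/env bash
# Sketch of the lcov merge-and-filter pattern recorded in the log above.
# OUT is an illustrative assumption; the real job writes next to the spdk tree.
set -euo pipefail
OUT=./output

# Merge the pre-test baseline with the post-test capture into one tracefile.
lcov --rc lcov_branch_coverage=1 -q \
     -a "$OUT/cov_base.info" \
     -a "$OUT/cov_test.info" \
     -o "$OUT/cov_total.info"

# Strip paths that should not count toward coverage, one glob at a time.
# lcov reads the whole input before writing, so reusing the same file is safe.
for glob in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov --rc lcov_branch_coverage=1 -q \
         -r "$OUT/cov_total.info" "$glob" \
         -o "$OUT/cov_total.info"
done

# Keep only the merged, filtered tracefile; drop the intermediates.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"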
00:43:12.320 07:38:23 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:43:12.320 07:38:23 -- spdk/autorun.sh@1 -- $ timing_finish
00:43:12.320 07:38:23 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:43:12.320 07:38:23 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:12.320 07:38:23 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:43:12.320 07:38:23 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:12.581 + [[ -n 2065726 ]]
00:43:12.581 + sudo kill 2065726
00:43:12.594 [Pipeline] }
00:43:12.609 [Pipeline] // stage
00:43:12.614 [Pipeline] }
00:43:12.629 [Pipeline] // timeout
00:43:12.634 [Pipeline] }
00:43:12.648 [Pipeline] // catchError
00:43:12.654 [Pipeline] }
00:43:12.670 [Pipeline] // wrap
00:43:12.675 [Pipeline] }
00:43:12.686 [Pipeline] // catchError
00:43:12.693 [Pipeline] stage
00:43:12.695 [Pipeline] { (Epilogue)
00:43:12.705 [Pipeline] catchError
00:43:12.706 [Pipeline] {
00:43:12.718 [Pipeline] echo
00:43:12.719 Cleanup processes
00:43:12.725 [Pipeline] sh
00:43:13.014 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:13.014 2744667 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:13.028 [Pipeline] sh
00:43:13.320 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:13.320 ++ grep -v 'sudo pgrep'
00:43:13.320 ++ awk '{print $1}'
00:43:13.320 + sudo kill -9
00:43:13.320 + true
00:43:13.332 [Pipeline] sh
00:43:13.618 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:25.855 [Pipeline] sh
00:43:26.144 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:26.144 Artifacts sizes are good
00:43:26.159 [Pipeline] archiveArtifacts
00:43:26.167 Archiving artifacts
00:43:26.347 [Pipeline] sh
00:43:26.698 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:43:26.715 [Pipeline] cleanWs
00:43:26.727 [WS-CLEANUP] Deleting project workspace...
00:43:26.727 [WS-CLEANUP] Deferred wipeout is used...
00:43:26.736 [WS-CLEANUP] done
00:43:26.738 [Pipeline] }
00:43:26.756 [Pipeline] // catchError
00:43:26.768 [Pipeline] sh
00:43:27.055 + logger -p user.info -t JENKINS-CI
00:43:27.065 [Pipeline] }
00:43:27.080 [Pipeline] // stage
00:43:27.085 [Pipeline] }
00:43:27.100 [Pipeline] // node
00:43:27.106 [Pipeline] End of Pipeline
00:43:27.140 Finished: SUCCESS